* telnet: buffered output for async server
* telnet: make async buffer an option
* just use the queue containers directly
* try with simpler list
* exhaust buffers as much as possible in a single try
* don't forget to destroy the client object
* naming
* kill the connection earlier
* fix merge issues
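A rough sketch of the buffering approach the telnet items above describe: pending output is kept in a plain list of byte chunks and drained as far as the client allows in a single pass, with the chunk list owned per client so it can be destroyed with the client object. `clientSpace()` and `clientWrite()` are hypothetical stand-ins for the async TCP client API, not the actual firmware functions.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <vector>

// Hypothetical stand-ins for the async TCP client API (illustrative only).
static size_t clientSpace() { return 536; }                           // bytes the client can take right now
static size_t clientWrite(const uint8_t*, size_t len) { return len; } // bytes actually queued for sending

struct TelnetBuffer {
    std::list<std::vector<uint8_t>> chunks;  // pending output, oldest first

    void queue(const uint8_t* data, size_t len) {
        chunks.emplace_back(data, data + len);
    }

    // Exhaust the buffers as much as possible in a single try:
    // keep writing until the client runs out of space or we run out of data.
    void flush() {
        while (!chunks.empty()) {
            auto& chunk = chunks.front();

            const size_t space = clientSpace();
            if (!space) break;

            const size_t want = (chunk.size() < space) ? chunk.size() : space;
            const size_t sent = clientWrite(chunk.data(), want);
            if (!sent) break;

            if (sent == chunk.size()) {
                chunks.pop_front();                         // whole chunk went out
            } else {
                chunk.erase(chunk.begin(), chunk.begin() + sent);
                break;                                      // partial write, stop for now
            }
        }
    }
};
```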
* light: fix inconsistent transitions
- capture step variable for the current transition
- use one-shot timer, restart from the timer callback
* schedule inside provider func, not transition
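A minimal sketch of the one-shot timer pattern described above: the step for the current transition is captured by value in the callback, and the timer is re-armed from inside that callback until the target is reached. `scheduleOnce` is a hypothetical stand-in for the device's one-shot timer (a Ticker-style API); here it simply invokes the callback so the sketch runs standalone.

```cpp
#include <cstdint>
#include <functional>

// Hypothetical one-shot scheduler; on the device this would arm a timer for
// `ms` milliseconds. Here it fires immediately so the sketch is runnable.
static void scheduleOnce(uint32_t ms, std::function<void()> fn) {
    (void) ms;
    fn();
}

struct Transition {
    float current;    // current channel value
    float target;     // value we are transitioning towards
    float step;       // increment per tick, captured for this transition only
    uint32_t stepMs;  // delay between ticks
};

// Capture the transition (including its step) by value and restart the
// one-shot timer from the callback itself, instead of relying on shared
// state that a newer transition could overwrite mid-flight.
static void startTransition(Transition t) {
    scheduleOnce(t.stepMs, [t]() mutable {
        t.current += t.step;

        const bool done = (t.step >= 0.0f) ? (t.current >= t.target)
                                           : (t.current <= t.target);
        if (done) {
            t.current = t.target;  // apply the final value here
            return;
        }

        // apply the intermediate value here, then re-arm the timer
        startTransition(t);
    });
}
```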
- customize relay TOGGLE payload
- match payload string when receiving mqtt status message
- reference enum values instead of raw integers, spell out intended status
- remove dead code
- amend #1885: capitalize `relayPayload...` suffix instead of using uppercase
- add `relayPayloadToggle`
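A small sketch of the payload matching described above: the string received over MQTT is compared against the configured values and mapped to a named status instead of raw integers. The struct and its defaults are illustrative; in the firmware the values would come from the `relayPayload...` settings such as `relayPayloadToggle`.

```cpp
#include <string>

// Named payload status values, spelled out instead of raw integers.
enum class PayloadStatus {
    Off,
    On,
    Toggle,
    Unknown
};

// Runtime-configurable payload strings (illustrative defaults).
struct RelayPayloads {
    std::string on     = "1";
    std::string off    = "0";
    std::string toggle = "2";
};

// Match the payload string received in the mqtt status message against the
// configured values.
static PayloadStatus parseRelayPayload(const RelayPayloads& cfg, const std::string& payload) {
    if (payload == cfg.off)    return PayloadStatus::Off;
    if (payload == cfg.on)     return PayloadStatus::On;
    if (payload == cfg.toggle) return PayloadStatus::Toggle;
    return PayloadStatus::Unknown;
}
```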
- remove "platform" key, see #1440. this implicitly sets schema to "basic". pending some other clean-up regarding json and mqtt queueing, other schema can be added down the line
- update the ws module queue element to capture a list of callbacks, allowing more than one callback to be passed (for example, when they are generated on the fly as lambdas; see the ha wsPost usage)
- modify the method that sends the ha config to use the global ws queue, fixing the #1762 problem with empty topics and ensuring the json allocation is consistent
- use the existing defines to set the mqtt payload options. amend #1085, #1188 and #1883 to use the configured payload value. drop the HOMEASSISTANT_PAYLOAD... defines.
- update MQTT_STATUS_ONLINE/OFFLINE and RELAY_MQTT_ON/OFF with runtime configuration
- filter payload strings so that the resulting yaml value is not interpreted as a boolean (python True / False)
- add a helper method to settings to streamline string value manipulation
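One concrete piece of the above is the yaml filtering: payload values such as "on", "off", "true" or "yes" would be parsed as booleans if emitted unquoted in the generated Home Assistant yaml, which is how a bare "ON" ends up as python True. A rough sketch of the idea, with the exact value set and the plain double-quoting as assumptions:

```cpp
#include <cctype>
#include <string>

// YAML 1.1 treats these (case-insensitively) as booleans.
// The exact set checked here is an assumption for illustration.
static bool looksLikeYamlBool(std::string value) {
    for (auto& c : value) {
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    }
    return value == "y"    || value == "yes"   || value == "n"  || value == "no"
        || value == "true" || value == "false"
        || value == "on"   || value == "off";
}

// Quote the payload string so the resulting yaml value stays a string.
static std::string filterPayloadForYaml(const std::string& value) {
    if (looksLikeYamlBool(value)) {
        return "\"" + value + "\"";
    }
    return value;
}
```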
* test: etharp_gratuitous
* proper checks before calling etharp_gratuitous, move includes
* disable at runtime
* ms values
* reload
* debug
* reword
* filter by ifnum instead of checking for AP mode
* drop station_if check
2.3.0/lwip1 builds increment netif->num for each sta or ap interface,
lwip2 keeps it constant, but that seems like an implementation detail
and might break in the future anyway...
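A sketch of the resulting periodic call, assuming the lwIP raw API available on the ESP8266 cores; the setting names and the default interval are illustrative, and the interface filtering deliberately avoids relying on netif->num for the reason noted above.

```cpp
#include <cstdint>

extern "C" {
#include "lwip/netif.h"    // struct netif, netif_list, netif_is_up()
#include "lwip/etharp.h"   // etharp_gratuitous()
}

// Illustrative runtime configuration (names and defaults are assumptions).
static bool     gratuitousArpEnabled    = true;
static uint32_t gratuitousArpIntervalMs = 15000;
static uint32_t gratuitousArpLast       = 0;

// Call from the main loop. Sends a gratuitous ARP for every interface that is
// up and has an address, instead of filtering on netif->num, since the
// numbering differs between lwip1 and lwip2 builds.
void gratuitousArpLoop(uint32_t nowMs) {
    if (!gratuitousArpEnabled) return;
    if ((nowMs - gratuitousArpLast) < gratuitousArpIntervalMs) return;
    gratuitousArpLast = nowMs;

    for (netif* iface = netif_list; iface != nullptr; iface = iface->next) {
        if (!netif_is_up(iface)) continue;
        if (ip_addr_isany(&iface->ip_addr)) continue;
        etharp_gratuitous(iface);
    }
}
```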