Author: adrian
Date:   2011-08-15T03:41:48.415845Z

Do some relatively major changes to my software TX scheduling code.

In the first cut, and inspired by some other code I have here which
implements this, I decided to hide the per-TID locking behind the
hardware TXQ lock. That worked, as the mapping from TID to TXQ is
constant and predictable, but it meant the hardware lock was held far
longer than it needed to be. This has caused all kinds of problems.

A 'better' way (for values of 'better') would be to implement
fine-grained locking of the per-node and per-TID state. Then those
locks could be held exactly as long (or as short) as they need to be.
But for now I'm going with another temporary solution, designed to
give me more breathing room whilst I port over the code.

I've separated the TX code into a split setup: the upper half is the
net80211/network-facing code, and the lower half is the TX scheduling
and completion code.

* Create a "TX sched" task, which runs the TX scheduling code as a
  task.
* Remove a lot of the hardware TXQ locking and asserting.
* Re-introduce the per-TID software TXQ lock.
* Since the entry pathways into this code are no longer locked behind
  the hardware TXQ locks, re-lock the TID software queue access.
* Re-introduce short-held hardware TXQ locks, so top- and bottom-level
  fiddling of the hardware TXQs don't trample on each other.

Now the "top" level code (i.e., anything which wishes to queue
packets) enters via ath_start / ath_raw_xmit / ath_tx_start /
ath_tx_raw_start and will either queue directly to the hardware
(protected by the hardware TXQ locks) or be queued to the software
queue via a call to ath_tx_swq().

ath_tx_swq() simply queues a packet to the software queue (protected
by a software TXQ lock); the actual TX packet scheduling code is then
invoked via a task call to ath_tx_sched_proc(). ath_tx_sched_proc()
handles scheduling; ath_tx_proc() handles TX completion and further
scheduling. Since neither of them runs simultaneously, I can avoid a
lot of the complicated locking. (A rough sketch of this split is at
the end of this log.)

This likely won't be the final solution (I may end up introducing
fine-grained locks anyway), but it does push (almost) all of the
per-TID state and general aggregation state handling into the ath
task, rather than trying to handle concurrent accesses from TX
processes, RX/TX tasks and interrupt tasks.

Note: net80211 node cleanup and node flush may still need some further
locking; I'll look into that shortly. (node_flush can occur during a
scan, for example, and I haven't checked whether that runs within or
separately from the ath taskqueue.)
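For illustration, here is a minimal sketch of the shape of that split,
built on the stock mtx(9) and taskqueue(9) primitives. The structure
and function names here (the _sk suffix, tid_lock, sc_txschedtask and
so on) are simplified stand-ins invented for this sketch, not the real
if_ath types; only the locking and task shape is the point.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/taskqueue.h>

    struct ath_tid_sk {
            struct mtx      tid_lock;       /* per-TID software TXQ lock */
            /* ... per-TID software queue of frames lives here ... */
    };

    struct ath_softc_sk {
            struct taskqueue        *sc_tq;         /* the driver's ath taskqueue */
            struct task             sc_txschedtask; /* the "TX sched" task */
            struct ath_tid_sk       sc_tid;         /* one TID, for brevity */
    };

    /*
     * Bottom half: runs on the ath taskqueue. TX completion runs on
     * the same taskqueue, so the two never execute concurrently and
     * the per-TID/aggregation state needs no locking beyond the
     * short software-queue and hardware-queue lock holds.
     */
    static void
    ath_tx_sched_proc_sk(void *arg, int npending)
    {
            struct ath_softc_sk *sc = arg;

            (void) npending;
            mtx_lock(&sc->sc_tid.tid_lock);
            /* ... pull eligible frames off the per-TID software queue ... */
            mtx_unlock(&sc->sc_tid.tid_lock);

            /*
             * Hand frames to the hardware under a short-held hardware
             * TXQ lock (elided), so the top and bottom halves don't
             * trample on each other's hardware queue manipulation.
             */
    }

    /*
     * Top half: called from the net80211-facing TX entry points.
     * Queue the frame under the software TXQ lock only, then kick
     * the scheduler task; no hardware TXQ lock is held on this path.
     */
    static void
    ath_tx_swq_sk(struct ath_softc_sk *sc /* , frame */)
    {
            mtx_lock(&sc->sc_tid.tid_lock);
            /* ... append the frame to the per-TID software queue ... */
            mtx_unlock(&sc->sc_tid.tid_lock);

            taskqueue_enqueue(sc->sc_tq, &sc->sc_txschedtask);
    }

    /* One-time setup, e.g. from attach. */
    static void
    ath_tx_sched_setup_sk(struct ath_softc_sk *sc)
    {
            mtx_init(&sc->sc_tid.tid_lock, "ath_tid", NULL, MTX_DEF);
            TASK_INIT(&sc->sc_txschedtask, 0, ath_tx_sched_proc_sk, sc);
            /* sc->sc_tq is assumed to be the already-created ath taskqueue. */
    }

One property worth noting: if the task is already queued,
taskqueue_enqueue() just bumps its pending count rather than linking
it a second time, so the top half can kick the scheduler for every
queued frame without creating duplicate scheduling work.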