author    Vakul Garg <vakul.garg@nxp.com>  2018-09-24 15:35:56 +0530
committer David S. Miller <davem@davemloft.net>  2018-09-24 12:24:24 -0700
commit    9932a29ab1be1427a2ccbdf852a0f131f2849685 (patch)
tree      b721798d416991241369628a76f48a872d08bb0a /net/tls/tls_main.c
parent    094fe7392d6e0c8fa516dca451d4e005a2238e28 (diff)
net/tls: Fixed race condition in async encryption
On processors with multi-engine crypto accelerators, it is possible that
multiple records get encrypted in parallel and their encryption
completions are notified to different cpus of a multicore processor. This
leads to a situation where tls_encrypt_done() starts executing in
parallel on different cores. In the current implementation, encrypted
records are queued to tx_ready_list in tls_encrypt_done(). This requires
additions to the linked list 'tx_ready_list' to be protected. As
tls_encrypt_done() could be executing in irq context, it is not possible
to protect the linked list addition operation using a lock.
To fix the problem, we remove the linked list addition operation from
irq context. We do tx_ready_list addition/removal operations from
application context only and get rid of possible concurrent access to
the linked list. Before starting encryption on a record, we add it to
the tail of tx_ready_list. To prevent tls_tx_records() from transmitting
it, we mark the record with a new flag 'tx_ready' in 'struct tls_rec'.
When record encryption completes, tls_encrypt_done() only has to
update the 'tx_ready' flag to true; no linked list add operation is
required.
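The scheme can be sketched with a toy, userspace model (names like
'toy_rec', 'submit_for_encryption' and 'transmit_ready' are illustrative
only, not the kernel's; the kernel uses 'struct list_head' and its own
helpers). The key point is that only application context touches the list
links, while the completion callback merely flips an atomic flag:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-in for 'struct tls_rec' with the new tx_ready flag. */
struct toy_rec {
	atomic_bool tx_ready;	/* set only by the completion callback */
	int seq;		/* tls record sequence number (demo only) */
	struct toy_rec *next;
};

struct toy_tx_ctx {
	struct toy_rec *head, *tail;	/* tx list: app context only */
};

/* Application context: queue the record *before* encryption starts,
 * so the completion callback never has to touch the list links. */
static void submit_for_encryption(struct toy_tx_ctx *ctx, struct toy_rec *rec)
{
	rec->tx_ready = false;
	rec->next = NULL;
	if (ctx->tail)
		ctx->tail->next = rec;
	else
		ctx->head = rec;
	ctx->tail = rec;
}

/* Completion callback (may run in irq context): only flips the flag. */
static void toy_encrypt_done(struct toy_rec *rec)
{
	atomic_store(&rec->tx_ready, true);
}

/* Application context: transmit in-order records whose encryption has
 * finished; stop at the first record that is not yet ready. */
static int transmit_ready(struct toy_tx_ctx *ctx)
{
	int sent = 0;

	while (ctx->head && atomic_load(&ctx->head->tx_ready)) {
		struct toy_rec *rec = ctx->head;

		ctx->head = rec->next;
		if (!ctx->head)
			ctx->tail = NULL;
		free(rec);	/* "transmit" the record */
		sent++;
	}
	return sent;
}
```

Because records are queued before encryption starts and only dequeued
from the head in application context, an out-of-order completion (say,
record 1 finishing before record 0) leaves record 1 parked in the list
until its predecessors are ready, preserving transmit order.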
The changed logic brings some other side benefits. Since the records
are always submitted for encryption in tls sequence number order, the
tx_ready_list always remains sorted, and adding a new record to it
does not require traversing the linked list.
Lastly, we renamed tx_ready_list in 'struct tls_sw_context_tx' to
'tx_list'. This is because now some of the records at the tail may
not yet be ready to transmit.
Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/tls/tls_main.c')
-rw-r--r--  net/tls/tls_main.c  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
index 06094de7a3d9..b428069a1b05 100644
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -212,7 +212,7 @@ int tls_push_pending_closed_record(struct sock *sk,
 	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
 
 	if (tls_is_partially_sent_record(tls_ctx) ||
-	    !list_empty(&ctx->tx_ready_list))
+	    !list_empty(&ctx->tx_list))
 		return tls_tx_records(sk, flags);
 	else
 		return tls_ctx->push_pending_record(sk, flags);
@@ -233,7 +233,7 @@ static void tls_write_space(struct sock *sk)
 	}
 
 	/* Schedule the transmission if tx list is ready */
-	if (is_tx_ready(ctx, tx_ctx) && !sk->sk_write_pending) {
+	if (is_tx_ready(tx_ctx) && !sk->sk_write_pending) {
 		/* Schedule the transmission */
 		if (!test_and_set_bit(BIT_TX_SCHEDULED, &tx_ctx->tx_bitmask))
 			schedule_delayed_work(&tx_ctx->tx_work.work, 0);