Re: Early releases of lock in mamaSubscription destroy/deallocate logic
Does this actually address the issue with prematurely releasing the lock? Wouldn’t it be safer, as Mike suggested, to allow the thread that holds the lock to mark the object for destruction, but to defer the memory clean up until the unlock occurs?
Do you want to have a side conversation with Mike about his concerns – I think it might help?
Nigel Phelan | Corporate & Investment Bank | Market Data Services | J.P. Morgan
From: Openmama-dev@... [mailto:Openmama-dev@...] On Behalf Of Frank Quinn
Sent: 01 June 2020 22:16
To: Slade, Michael J (CIB Tech, GBR) <michael.j.slade@...>
Subject: Re: [Openmama-dev] Early releases of lock in mamaSubscription destroy/deallocate logic
Hi Mike – did you have any joy with testing this one?
I have put together something that looks like it addresses this (I modified qpid and MamaListen locally with various sleeps to verify the behaviour). It turned out to be a little trickier than I had anticipated with the different paths through the state machine.
It works by adding an atomic reference counter that is incremented for the bridge and for the Java wrapper, so that the memory is only released once a statement of disinterest has been received from both. Since calling deallocate is an admission that you will no longer be using that memory, operating on a first-past-the-post system should be safe.
There are many paths through the MAMA subscription state machine though so I'd appreciate if you could give it a test in your target environment before I merge it in. You can find my development branch for this change on:
Let me know if this works for you.
If you could take a look at implementing this, that would be much appreciated.
OK, thanks Mike. I thought user interaction was the primary problem, but it sounds like it's the finalizers.
This is where the subscription stuff gets hairy because there are a few different but similar areas at play here.
Yes, you should ideally unlock before the mamaSubscriptionImpl_deallocate to avoid undefined behaviour in the destroy. However...
In Java, when the GC kicks in, it fires the JNI Java_com_wombat_mama_MamaSubscription_deallocate method, which in turn calls mamaSubscription_deallocate, which does acquire the subscription's mCreateDestroyLock.
That lock could already be held while mamaSubscriptionImpl_deallocate is then called (!). So when you look at it, we actually already have undefined behaviour on that path; it's effectively the same bug.
The path with a Java destroy goes through JNI Java_com_wombat_mama_MamaSubscription_destroy, which calls mamaSubscription_destroy, which will usually deactivate the subscription and then go on to invoke the destroyed callback. I was suggesting that at this point we *do* hang onto the lock until that callback has completed, to protect the subscription object.
The path through mamaSubscriptionImpl_onSubscriptionDestroyed is a different beast: that is the middleware letting MAMA know that a subscription has been destroyed, which may happen via mamaSubscription_destroy -> mamaSubscription_deactivate. In this case we currently unlock before the callback. I was suggesting this should be moved to after the callback, but before the deallocate. If we hung onto the lock until after the deallocate, we'd just be emulating the buggy behaviour already present in mamaSubscription_deallocate.
But yes, when I look this through, there is a more subtle issue afoot, because we have onSubscriptionDestroyed, which will be called here (already inside the lock).
Now, since wlocks are recursive and these calls come from the same thread, this won't deadlock and should already be protected from the finalizer. But it may not happen in this order, depending on how the onDestroy callback is implemented in the bridge, so you could get something more like this:
    ... bridge takes this under consideration ...
    ... time passes ...
    mamaSubscriptionImpl_deallocate()  <-- this is where it looks like the GC has an opportunity to cause trouble?
If we are going to have multiple threads coming in like this, then yes, I think an atomic reference counter in the subscription object, tracking when each resource depending on the subscription object has gained and lost interest, would be preferable to holding onto the lock. It would fix the existing bug in the finalizer too.
Is this something you're looking for clarification on before implementing it, or do you want me to have a look at implementing it?
Why do we need to unlock the mutex before the deallocate call? I understand that this call destroys the lock, but with the proposed wrapper around the lock, the destroy call can defer the actual release of the memory to the unlock call.
The early release could be exploited because the MamaSubscription Java wrapper calls deallocate in its finalize method. Due to this, the GC thread could acquire the lock and release memory in mamaSubscriptionImpl_deallocate while the lock-releasing thread is trying to use this memory to invoke the user callback.
Could you shed some more light on what exactly the problem is here? I have had a refresher in that area and, from what I can see, I can’t think of any reason to unlock that particular mutex before the user callback, so I’m not opposed to moving it until after that callback if that will resolve whatever issue you’re seeing. It will need to be before the deallocate, though.
The only reason I could think of for why this was done in the first place is to avoid a deadlock if the user calls something like mamaSubscription_deallocate, or anything else that uses that mutex, from within the callback; but then I guess they’d learn pretty quickly if they had fallen foul of that. Then I thought this might somehow be exploited in one of the Java or .NET wrappers, but I couldn’t find any evidence of that either. Plus, that callback is more informative than anything else for the application, so if they are attempting to deallocate from it and that’s causing funny business, we just need to make it clear in the callback documentation that they shouldn’t be doing that.
At first glance, that sounds like just “kicking the can down the road” — i.e., you’re still left with the problem of what to do with these wrappers such that you can tear them down in a thread-safe manner.
Having said that, if you have an implementation that works, I’m sure that others would be interested, so maybe put it up on GitHub or something?
My personal preference would be to try to find something on the intertubes that has been tested up the wazoo — concurrency is hard, and the hardest part IMO is tear-down of event sources/sinks.