EldoS | Feel safer!

Software components for data protection, secure storage and transfer

SendData sync method hanging

Posted: 02/18/2014 04:40:39
by Vsevolod Ievgiienko (Team)


I've run your test and nothing special happens: no exceptions and no deadlocks. Could you clarify how we can use it to reproduce your problem?
Posted: 02/18/2014 07:12:59
by steve cook (Standard support level)
Joined: 11/15/2013
Posts: 11

I've uploaded a screencast:

Posted: 02/18/2014 12:13:25
by Vsevolod Ievgiienko (Team)

Thanks for the detailed instructions.

Indeed, a deadlock appears. In general, the problem is a consequence of the fact that TElSSHTunnelConnection.SendData is called inside the TElSSHServer.OnData event handler.

In your code, the solution is to implement the SSHChat.Broadcast() method in such a way that SSHSession.Broadcast() for each session is called in a separate thread. In my opinion, the best way is to implement an SSHSession.writer() method, complementary to SSHSession.reader(), that runs in a separate thread. SSHSession.Broadcast() will put data into an internal buffer, and SSHSession.writer() will monitor this buffer and call TElSSHTunnelConnection.SendData when new bytes are added to it.
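The suggested pattern can be sketched independently of the EldoS classes as a single-writer queue. This is a hypothetical sketch, not the actual SSHSession code: the `send` delegate stands in for whatever blocking send routine is used (e.g. TElSSHTunnelConnection.SendData), and the class and member names are illustrative.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical sketch: Broadcast() only queues the data; a dedicated
// writer thread drains the queue and performs the actual (potentially
// blocking) send outside any event handler.
class SessionWriter
{
    private readonly BlockingCollection<byte[]> sendQueue =
        new BlockingCollection<byte[]>();
    private readonly Action<byte[]> send;   // stands in for the real SendData call
    private readonly Thread writerThread;

    public SessionWriter(Action<byte[]> send)
    {
        this.send = send;
        writerThread = new Thread(Writer) { IsBackground = true };
        writerThread.Start();
    }

    // Safe to call from an OnData handler: it never blocks on the SSH lock.
    public void Broadcast(byte[] buffer) => sendQueue.Add(buffer);

    public void Close()
    {
        sendQueue.CompleteAdding();   // lets the writer loop drain and exit
        writerThread.Join();
    }

    private void Writer()
    {
        // The only thread that ever sends; FIFO order is preserved.
        foreach (byte[] buffer in sendQueue.GetConsumingEnumerable())
            send(buffer);
    }
}
```

BlockingCollection.GetConsumingEnumerable() blocks while the queue is empty and completes once CompleteAdding() has been called and the queue is drained, which gives a clean shutdown without polling timeouts.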
Posted: 02/18/2014 22:38:27
by steve cook (Standard support level)
Joined: 11/15/2013
Posts: 11

OK great - I can confirm that this fixes the issue.

However, I thought that SendData was thread-safe (because it already has a lock)?

Please update the documentation on this (it's not covered by the example app, and the docs don't mention anything about it in the SendData method) - I think it will be useful for others.

Solution code:

        //solution - use blocking queue to ensure all sends come from a single thread

        void server_OnOpenShell(object Sender, TElSSHTunnelConnection connection)
        {
            Connection = connection;
            Thread writerThread = new Thread(new ThreadStart(writer));
            writerThread.Start();
        }

        private BlockingCollection<byte[]> SendQueue = new BlockingCollection<byte[]>();

        internal void Broadcast(byte[] buffer)
        {
            SendQueue.Add(buffer);
        }

        private void writer()
        {
            //drain the queue and send from this single thread only
            while (Connection != null && Connection.CanSend())
            {
                byte[] buffer;
                bool hasData = SendQueue.TryTake(out buffer, 1000);
                if (hasData)
                    Connection.SendData(buffer);
            }
            Console.Out.WriteLine("Closing writer thread");
        }
Posted: 02/19/2014 02:33:39
by Vsevolod Ievgiienko (Team)

In fact, SendData is thread-safe. The problem is in how it's used in the code.
Posted: 02/19/2014 03:30:16
by Eugene Mayevski (Team)

What happens in your code is that you concurrently access two TElSSHServer instances from two threads, and they lock each other. Moving SendData out of the event handler will solve the problem, and getting rid of the call into the other thread's TElSSHServer would solve it as well.

Sincerely yours
Eugene Mayevski
Posted: 02/19/2014 03:34:03
by Ken Ivanov (Team)


The components themselves are thread-safe, in the sense that a single bundle of TElSSHServer and TElSSHTunnelConnection objects can be accessed from different threads without any problems. However, the nature of the problem you came across is slightly different and, in fact, is not something that simple locking measures can overcome.

Let me explain what is going on. When the OnData event fires, the thrower of the event is already holding the lock and will not release it until the event handler returns. If two different TElSSHServer objects fire OnData at the same time, both of them hold the corresponding locks in their threads. Now, if one of the OnData handlers, directly or further down the stack, attempts to call SendData() on the other server object, that call blocks because of the lock acquired by the other server before its OnData was fired.
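The scenario described above can be reproduced in miniature. This is a hypothetical, self-contained sketch with no EldoS classes involved: two plain monitor locks stand in for the servers' internal locks, and Monitor.TryEnter with a timeout stands in for the blocking acquire, so the demo reports the deadlock instead of hanging forever.

```csharp
using System;
using System.Threading;

// Hypothetical sketch (not EldoS code): each "server" fires OnData while
// holding its own lock; the handler then tries to send on the *other*
// server, which needs that server's lock. With both doing this at once,
// each waits on a lock the other holds.
static class CrossServerDeadlock
{
    public static (bool aToB, bool bToA) Run()
    {
        object lockA = new object(), lockB = new object();
        var bothHeld = new Barrier(2);   // both locks are held at this point
        bool aToB = true, bToA = true;

        var t1 = new Thread(() =>
        {
            lock (lockA)                              // server A fires OnData
            {
                bothHeld.SignalAndWait();
                aToB = Monitor.TryEnter(lockB, 200);  // "SendData" on server B
                if (aToB) Monitor.Exit(lockB);
                Thread.Sleep(500);                    // keep lockA held past B's timeout
            }
        });
        var t2 = new Thread(() =>
        {
            lock (lockB)                              // server B fires OnData
            {
                bothHeld.SignalAndWait();
                bToA = Monitor.TryEnter(lockA, 200);  // "SendData" on server A
                if (bToA) Monitor.Exit(lockA);
                Thread.Sleep(500);                    // keep lockB held past A's timeout
            }
        });
        t1.Start(); t2.Start(); t1.Join(); t2.Join();
        return (aToB, bToA);   // both false: each send blocked on the other's lock
    }
}
```

A per-object lock cannot help here: each thread already owns one of the two locks, so the only fix is to perform the send outside the handler (from a dedicated writer thread), exactly as in the queue-based solution above.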
Posted: 02/19/2014 04:06:44
by steve cook (Standard support level)
Joined: 11/15/2013
Posts: 11

Thanks for the detailed explanation - this makes sense and matches the issues that have been reported.


