The answer to your original question is negative: we do not have the ability to make asynchronous Read calls, and it is not planned. The fact that the QuickOPC API is synchronous (between the library and the developer; not necessarily internally) is among several "foundation" design decisions that are most likely not going to go away. They were made so that the API remains simple, while knowing perfectly well that there are use cases where this becomes a limiting factor. Another example of such a decision, where simplicity wins, is the fact that Browse returns all the results at once, even though that may mean a very large and long browse (many nodes), which internally uses "continuation points" and could therefore be split into multiple smaller parts if the API supported it.
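For context, the synchronous shape of the API looks roughly like this; the endpoint URL and node ID below are placeholders, not values from your setup:

```csharp
// Minimal sketch of a synchronous QuickOPC read. The calling thread
// blocks until the server responds (or the call fails).
using System;
using OpcLabs.EasyOpc.UA;

class SyncReadSketch
{
    static void Main()
    {
        var client = new EasyUAClient();

        // Placeholder endpoint and node ID - substitute your own.
        object value = client.ReadValue(
            "opc.tcp://localhost:48010",
            "nsu=http://test.org/UA/;i=10845");

        Console.WriteLine(value);
    }
}
```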
Depending on how well your server handles it, you might be able to overcome this limitation by creating a separate instance of the EasyUAClient object for each target PLC and setting the "Isolated" property to 'true' on all of them. This will result in up to 200 separate OPC UA sessions to the server (which may go well, but may also bog down the server or the client; it needs to be tested). With these separate EasyUAClient instances, you can keep using synchronous calls, but if you use multiple threads, and one thread makes a Read on the EasyUAClient for one PLC while a second thread makes a Read on a different EasyUAClient for items that reside on the second PLC, they will execute in parallel. Even if one of them blocks, it will not block the other, which is the result you want.
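A rough sketch of that arrangement, assuming the Isolated property and synchronous ReadValue calls as above (the endpoint and node IDs are placeholders, and error handling is omitted):

```csharp
// Hedged sketch: one isolated EasyUAClient per PLC, so reads against
// different PLCs run on separate OPC UA sessions. Each synchronous
// ReadValue runs on its own task, so a slow PLC only delays its own read.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using OpcLabs.EasyOpc.UA;

class IsolatedClientsSketch
{
    static void Main()
    {
        string endpoint = "opc.tcp://gateway:4840";  // placeholder

        // Placeholder node IDs, one tag per PLC.
        string[] plcNodes =
        {
            "nsu=http://example.org/;s=PLC1.Tag",
            "nsu=http://example.org/;s=PLC2.Tag",
        };

        // One client per PLC; Isolated = true gives each its own session.
        var clients = new List<EasyUAClient>();
        foreach (var _ in plcNodes)
            clients.Add(new EasyUAClient { Isolated = true });

        // Issue the synchronous reads concurrently, one thread per client.
        var tasks = new List<Task<object>>();
        for (int i = 0; i < plcNodes.Length; i++)
        {
            int index = i;  // capture a stable copy for the lambda
            tasks.Add(Task.Run(
                () => clients[index].ReadValue(endpoint, plcNodes[index])));
        }

        Task.WaitAll(tasks.ToArray());
        foreach (var task in tasks)
            Console.WriteLine(task.Result);
    }
}
```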
Unless you are actually communicating with that many PLCs at the very same time, the actual number of concurrently open sessions will be much lower, because QuickOPC automatically closes sessions that are not in use.
One last note: with either the "fully asynchronous" approach (some other library) or the multi-session approach proposed above, you still need to verify that the OPC server is actually capable of handling the requests in parallel and won't serialize them internally. Imagine an asynchronous approach where your client issues "Read Request 1" followed by "Read Request 2". A not-so-good server may simply start executing "Read Request 1" and block "Read Request 2" until it has sent you "Read Response 1". In that case the asynchronicity will be there, but it won't help you. It is better to check the server's behavior before you settle on a way to go.
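One crude way to check this is a timing comparison: read two slow tags sequentially, then concurrently on isolated clients, and compare. This is only a sketch under the same assumptions as above (placeholder endpoint and node IDs, ideally tags whose reads take a noticeable amount of time on the server side):

```csharp
// Hedged sketch of a server-parallelism check. If the concurrent time
// is close to the sequential time, the server is likely serializing
// requests internally; if it is close to half, requests run in parallel.
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using OpcLabs.EasyOpc.UA;

class ParallelismCheckSketch
{
    static void Main()
    {
        string endpoint = "opc.tcp://gateway:4840";             // placeholder
        string node1 = "nsu=http://example.org/;s=PLC1.SlowTag"; // placeholder
        string node2 = "nsu=http://example.org/;s=PLC2.SlowTag"; // placeholder

        var client1 = new EasyUAClient { Isolated = true };
        var client2 = new EasyUAClient { Isolated = true };

        // Sequential baseline: one read after the other.
        var stopwatch = Stopwatch.StartNew();
        client1.ReadValue(endpoint, node1);
        client2.ReadValue(endpoint, node2);
        TimeSpan sequential = stopwatch.Elapsed;

        // Concurrent reads on separate sessions.
        stopwatch.Restart();
        Task.WaitAll(
            Task.Run(() => client1.ReadValue(endpoint, node1)),
            Task.Run(() => client2.ReadValue(endpoint, node2)));
        TimeSpan concurrent = stopwatch.Elapsed;

        Console.WriteLine($"sequential: {sequential}, concurrent: {concurrent}");
    }
}
```

Run it a few times to smooth out network jitter before drawing a conclusion.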