<table width="100%" cellpadding=0><tr>
<td width="30%">Back to <a href="index.html">Fetchmail Home Page</a>
<td width="30%" align=center>To <a href="/~esr/sitemap.html">Site Map</a>
<td width="30%" align=right>$Date: 1998/03/26 21:51:29 $
</table>
<HR>
<H1 ALIGN=CENTER>Design Notes On Fetchmail</H1>
<H2>Multiple concurrent instances of fetchmail</H2>
Fetchmail locking is done on a per-invoking-user basis because
finer-grained locks would be really hard to implement in a portable
way. The problem is that you don't want two fetchmails querying the
same site for the same remote user at the same time.<P>

To handle this optimally, multiple fetchmails would have to associate
a system-wide semaphore with each active pair of a remote user and
host canonical address. A fetchmail would have to block until getting
this semaphore at the start of a query, and release it at the end of a
query.<P>

This would be way too complicated to do just for an "it might be nice"
feature. Instead, you can run a single root fetchmail polling for
multiple users in either single-drop or multidrop mode.<P>
The fundamental problem here is how an instance of fetchmail polling
host foo can assert that it's doing so in a way visible to all other
instances. (And what happens if a fetchmail aborts before clearing
its semaphore, and how do we recover reliably?)<P>
I'm just not satisfied that there's enough functional gain here to pay
for the large increase in complexity that adding these semaphores
would entail.<P>
<table width="100%" cellpadding=0><tr>
<td width="30%">Back to <a href="index.html">Fetchmail Home Page</a>
<td width="30%" align=center>To <a href="/~esr/sitemap.html">Site Map</a>
<td width="30%" align=right>$Date: 1998/03/26 21:51:29 $
</table>
<P><ADDRESS>Eric S. Raymond <A HREF="mailto:esr@thyrsus.com">&lt;esr@snark.thyrsus.com&gt;</A></ADDRESS>