There are two approaches to processing news:
Batched newsfeeds. This is where a recipient host is "fed" news in batches controlled by the parent news hosts. This is the most efficient method of getting a large news feed, as it allows indexing to be done when necessary, and avoided when it's not.
"Online," by connection to an NNTP server. Here, articles are requested one by one as required from the main NNTP server. This approach represents the mechanism by which a news reader accesses news, an article at a time. On the basis of an article at a time, it's quite efficient.
The "slurp" method involves requesting articles for a newsgroup using an "online" NNTP connection. It is quite appropriate for endusers that wish to download some subset of newsgroups, and read material offline. Indeed, one would expect ISPs to encourage this, as it allows users to obtain news more quickly and log off, leaving the bank of expensive phone lines free for other callers.
When the NCAUG (Ottawa-area Atari Users Group) BBS moved to handling mail this way back in '83, using the .QWK format, the formerly overloaded three-line BBS found that two phone lines were more than plenty. People had been calling in for 45 minutes at a stretch to read and respond to mail and "conference discussions." With an offline reader, we could dial in, grab a mail "packet," and be off in a couple of minutes. A decent QWK package also provided far better message-editing tools than the typical BBS "line editor," to boot.
However, slurping is significantly less efficient than batching from the server's perspective when someone uses it to acquire a (substantially) full news feed. Some people running low-ball ISPs "slurp" news from another ISP as a substitute for paying for a real news feed, tying up NNTP connections and phone lines in the process. Worse still, the parent server's performance suffers badly: NNTP servers are optimized for serving individual articles to readers, not for building up news "batches," which is normally the job of separate software optimized for building and loading batches.
It is likely that NNTP servers will soon have authentication requirements and quota limits to begin to deal with these and other such problems. Discussions and patches are ongoing for INN (the NNTP server software used by most news servers) to make it less friendly to people who abuse it by slurping/sucking.
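INN 1.x already controls reader access through its nnrp.access file, whose fields are host:permissions:username:password:newsgroups. The hostnames below are hypothetical, but a deny-by-default policy that lets only local dialup customers read and post would look roughly like this:

```
## nnrp.access -- host:permissions:username:password:newsgroups
## Deny everyone by default...
*:: -no- : -no- :!*
## ...then allow our own dialup pool to read and post everything.
*.dialup.example.com:Read Post:::*
```

This keeps outsiders from slurping at all, though it does nothing by itself to limit how much a legitimate customer pulls; that is where the proposed quota work comes in.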
At any rate, all this makes a satellite connection (or something similar) look rather attractive. Why pay more, when for their roughly $100/month you get a full newsfeed that, arriving via satellite, consumes zero network bandwidth and is not delayed by any local network problems?
The would-be ISP operator that doesn't consider this or similar news options seems rather foolish.
The December 1996 issue of Linux Journal has an article documenting how to implement a PageSat connection. The scheme demonstrated there involves two servers:
The receiver server. This can be a relatively "wimpy" machine, as it only needs to move data from the satellite link onto the network. A 486/33 with 8MB of RAM, half a gigabyte of disk, and a network card is adequate. News data is received and periodically batched over to the news server for inclusion in the news spool. That obsolete PC in the closet may make a nice receiver server.
The news server. For a full news feed, this needs to be a big, "beefy" machine with a fair bit of RAM and fast disk. PCs with IDE drives need not apply; this calls for fast SCSI (SCSI-2 or better) across multiple drives, with something on the order of 10GB of space as a minimum.
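The "periodically batched to the news server" step amounts to concatenating received articles into rnews-format batches, in which each article is preceded by a "#! rnews <size>" line giving its length in bytes. A minimal sketch of building such a batch (the sample article is made up):

```python
def make_rnews_batch(articles):
    """Join articles into an rnews-format batch file: each article
    is preceded by '#! rnews <n>', n being its length in bytes."""
    chunks = []
    for article in articles:
        data = article.encode("utf-8")
        chunks.append(b"#! rnews %d\n" % len(data))
        chunks.append(data)
    return b"".join(chunks)

batch = make_rnews_batch(["From: a@example.com\n\nhello world\n"])
```

The byte counts are what let the news server split the batch back into articles without re-parsing headers, which is much of why batch loading is so much cheaper than article-at-a-time NNTP.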
A more recent and somewhat analogous scheme is the TiVo service. A TiVo recorder tries to call the central server every night at a late hour, when telephone lines are seldom in use. It pulls in TV schedule information for the next week or so, spooling and expiring it much like a news feed. By having each unit call at a random time late at night, TiVo's servers can support rather a lot of users.
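The load-spreading trick here is simply to have every unit pick its own random minute inside a late-night window, so thousands of callers arrive evenly instead of all at once. A sketch, assuming a hypothetical 1:00-5:00 AM window:

```python
import random

def pick_call_time(rng):
    """Pick a random (hour, minute) in the 1:00-4:59 AM window.
    Each unit seeds its own generator, so the fleet's calls spread
    roughly uniformly across the four-hour window."""
    minute_of_window = rng.randrange(4 * 60)   # four hours, minute granularity
    hour, minute = divmod(minute_of_window, 60)
    return 1 + hour, minute

hour, minute = pick_call_time(random.Random())
```

The same idea applies to news slurpers: clients that poll at a randomized off-peak time are far kinder to a shared server than ones that all fire at the top of the hour.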