"Sponsor projects for the creation, enhancement, and promotion of freely available tools and components for the continuing development and promotion of Linux and other free computer operating systems."
The software is reportedly now available; Intel may, someday, release hardware...
The latest and greatest "big change" upcoming in the Linux world is the move from "libc 5.x" to "libc 6.0," which is based on the FSF's new GLIBC.
The GLIBC HOWTO describes various benefits, including:
The library becomes thread-safe, allowing applications to readily support multithreading, which is a particularly big deal in conjunction with SMP (Symmetric Multiprocessing);
Improved math performance;
Architecture-specific code is more clearly separated from the majority that is architecture-neutral, thus allowing portability to different CPU architectures, and particularly to 64-bit systems (LIBC5 never supported anything other than Linux/IA-32);
Greatly reduced dependence of applications on Linux-specific kernel code (e.g. references to /usr/include/linux can be diminished or even eliminated);
The Linux C library 5.4.46 is a bug-fixing release for libc 5.4.44. There are no new features in this release. This is the last release of libc 5.4.xx. All my machines are now running Red Hat 5.1, which is based on glibc 2, aka libc 6. Please check the glibc 2 web site for details.
I really enjoyed working on the Linux C library with all the people over the years. Without you, Linux could not be what it is today. Thanks to your support, we did it. Now it is time to move on to libc6, aka glibc 2. I will be working on glibc 2 as a developer, but not as a maintainer; Ulrich has been doing a great job maintaining glibc 2. I will try to answer all the concerns over libc 5 vs. libc 6 as well as the bug issues related to libc 6.
-- Announcement from the former maintainer of "libc5," July 19, 1998
He also did an interview in Linux Gazette: The Answer Guy 32: The End of libc5: A Mini-Interview with H.J. Lu
Linux libc was a crude hack that was necessary in its time (as back then, the GNU libc maintainers weren't sufficiently motivated to support Linux properly, and much quick and dirty hacking was needed to get a libc running).
It has rightfully been abandoned.
The decision to stop Linux libc development (and, more recently, maintenance) in favour of GNU libc was not taken by any cabal (be that Red Hat, Debian, the Men In Black, or Black Helicopters Inc.), but was taken by the Linux libc maintainers themselves, as Linux had matured to the point where doing things right became more important than doing things quickly.
The diet libc is a libc that is optimized for small size. It can be used to create small statically linked binaries for Linux on alpha, arm, hppa, ia64, i386, mips, s390, sparc, sparc64, ppc and x86_64.
Wine is an implementation of the Windows 3.x and Win32 APIs on top of X and Unix. Think of Wine as a Windows compatibility layer. Wine provides both a development toolkit (Winelib) for porting Windows sources to Unix and a program loader, allowing many unmodified Windows 3.x/95/98/ME/NT/W2K/XP binaries to run under Intel Unixes. Wine works on most popular Intel Unixes, including Linux, FreeBSD, and Solaris.
There are senses in which it is not an emulator; the differences are not likely to be easily understood unless you are deep enough into things that you'd be wanting to actually write some form of emulation system.
It seems to handle "old" Win16 programs fairly well; ongoing work is providing increasing functionality from the Win32 application programming interface.
I've had success running the PIM application Above and Beyond under WINE, and probably ought to try out some project management software.
WINE will probably always have some problems with Windows applications written by MSFT; they tend to interface with the Win16/Win32 APIs in as non-portable a manner as they possibly can. The probable purpose is to "tune performance" as well as to foil attempts to run the applications under emulation systems like WINE; the resulting non-portability and instability of MSFT's applications, even under their own operating systems, is legendary. Nonetheless, successfully running Word atop WINE is the "Holy Grail" of the Wine efforts, and there have been increasingly successful attempts of late.
And providing the "flip side," there is LINE, which does roughly the reverse of WINE: it runs in a Windows environment, emulating a Linux system...
DOSEMU provides infrastructure for running MS-DOS applications under Linux.
Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation. FreeBSD and Plan 9 ports are under way.
This is a version of the Linux kernel providing "hard" real-time tasking.
Real-time tasks are run as loadable kernel modules, which gives them the option to temporarily turn off interrupts, giving them better access to the CPU than any normal user process.
is another variation on the Real Time approach for Linux.
for RT support
This project seeks particularly to improve the robustness and speed of graphics handling under Linux.
I've written a bit about the fairly Frequently Asked Question of whether it would be worthwhile to try to create a "RAMDoubler."
The quick answer is that the sort of thing done under Windows/MacOS would be pretty pointless, but that there is some formal research indicating that small incremental performance improvements could result from a careful implementation of this...
Every PC-based operating system has its own "bootloader" program, a small program which the computer's BIOS can load, which then can go on and load the operating system proper.
The traditional one used for Linux is known as LILO, the LInux LOader.
FreeBSD has its own bootstrap program; OS/2 another; Windows 95 starts up MS-DOS 7.0 (and its bootstrap program is rather unfriendly to other OSes); NT still another. Sun Microsystems developed one for workstation-class machines, based on FORTH, called OpenBoot. See also dmoz.org on Boot Loaders for Linux.
GRUB's claim to fame is that it is intended to be an OS-neutral boot loader, complying with the "Multiboot" standard, to run on IA-32 machines. It was developed in order to provide boot services for the GNU Project's Hurd OS.
GRUB has the neat property that, since you can always fully specify boot properties from the boot prompt, you need never render your system unbootable by accident.
It also offers a graphical boot screen, which is a pretty cool thing...
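As a sketch of what entering boot properties at the GRUB prompt looks like; the device names and kernel path here are illustrative, not taken from any particular system:

```
grub> root (hd0,0)
grub> kernel /boot/vmlinuz root=/dev/hda1
grub> boot
```

Because everything needed to boot can be typed in this way, a misconfigured menu file can always be worked around by hand.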
This project has implemented small microkernels for 80386 and 80486 architectures (with some work ongoing to move L4 to the Digital Alpha and SGI MIPS architectures) that are, unlike MACH, highly processor-dependent. This goes along with recent OS research that suggests that a microkernel must be tuned to the CPU's capabilities in order to get decent performance.
The developers of the L3 and L4 microkernels have chosen to host a single-server implementation of Linux in order to have some sort of base of measurement to compare to.
Linux was chosen because:
Porting a MACH-oriented server to L4 proved more complex
The port of Linux to OSF MACH provided a model for such development that proved useful
Linux represents a reasonable development environment, and allows L4 to readily self-host
The group was more familiar with Linux than with BSD
Linux/L4 apparently runs with performance similar to native Linux, and is binary compatible with user-level Linux IA-32 programs. The two major classes of incompatible programs are:
This looks like a project to watch in the microkernel arena.
Unfortunately, there appear to be some licensing problems. It appears that the authors of L4 would like for it to be freely available in source form; unfortunately some of the research sponsors are somewhat reluctant to have this be the case. As a result, only the MIPS version of L4 is available as free software.
The first notable Linux "virus." This program would be more correctly termed a "trojan horse." While the source code is publicly available (and rumoured to soon be GPLed!), you certainly do not wish to run it as root.
Bliss attaches itself to all binary programs that the user running it has write access to (a bad thing), and logs its actions in /tmp/.bliss (a good thing), with a --bliss-uninfect-files-please option to allow you to request that it clean up after itself. (And it in fact does so, which is also a good thing.)
If modified to delete/corrupt programs/data, it could become an exceedingly insidious and dangerous program, and thus is certainly not a "safe" thing to run in general.
A grand total of five, as of November 2000...
Used with many FSF programs to automate the configuration of software that is to be run on multiple platforms.
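A minimal `configure.ac` gives the flavour of how this works; the package name and version here are placeholders for a hypothetical project:

```
dnl Illustrative configure.ac for a hypothetical "hello" package.
AC_INIT([hello], [1.0])
AC_PROG_CC            dnl find a working C compiler on this platform
AC_CONFIG_FILES([Makefile])
AC_OUTPUT             dnl generate Makefile from Makefile.in
```

Running autoconf over this produces a portable `configure` shell script that probes the target system before building.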
This describes "standard" places to put system components to permit maximum interoperability between Linux distributions.
This is another init scheme where each service is represented by a script (unlike the BSD scheme), but where each service declares what services it is dependent on, or needs, which then results in the startup script for the needed services being run first.
The net effect is that init can run scripts in whatever order it wishes; they will request the services they need, thus resulting in a pretty clean system startup, without huge numbers of links, and without needing to use filenames to indicate the order in which services are to be started.
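The resolution logic can be sketched in a few lines; the service names and dependency table below are illustrative, not drawn from any real distribution:

```python
# Sketch of dependency-driven init: each service declares what it
# needs, and starting a service first starts its dependencies,
# so init can launch services in any order it wishes.
needs = {
    "network": [],
    "syslog": [],
    "nfs": ["network"],
    "sshd": ["network", "syslog"],
}

started = []

def start(service):
    """Start a service, recursively starting its needs first."""
    if service in started:
        return                      # already running; nothing to do
    for dep in needs[service]:
        start(dep)
    started.append(service)         # a real init would exec the script here

# init picks an arbitrary order; dependencies sort themselves out:
for svc in ("sshd", "nfs"):
    start(svc)
print(started)  # ['network', 'syslog', 'sshd', 'nfs']
```

Note that no filename-based ordering (S10network, S20syslog, ...) is needed; the order falls out of the declared dependencies.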
SystemServices is a replacement for the init system, not just a bridge. It does provide backward compatibility with init scripts, so they can still be used as SystemServices, but "native" SystemServices will be more capable than backward-compatible init scripts, simply because init scripts do not provide many useful features.
Uses genetic algorithms to build process scheduling policies.
The main practical benefit of this is that it can load Linux as the BIOS, and thereby boot up the kernel in as little as a couple of seconds, rather than the usual situation of loading it in from disk.
Another BIOS replacement project.
A project where the BIOS is replaced by Linux, so that you don't need to boot twice. Based on OpenBIOS
Red Hat Software has promoted the use of PAM as an authentication scheme for Linux that allows a high level of configurability of system security. The traditional method of authenticating oneself is to log in using user ID and password. In sensitive environments, one might want to use electronic passcard systems or other more sophisticated means of securing access. PAM allows creation of "Authentication Modules" that can use different means of getting authentication information.
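For instance, a PAM stack for login might look like the following; the module names are the standard Linux-PAM ones, though the exact stack varies between distributions:

```
# Illustrative /etc/pam.d/login; swapping pam_unix.so for another
# module changes the authentication method without touching login itself.
auth     required   pam_securetty.so
auth     required   pam_unix.so
account  required   pam_unix.so
session  required   pam_unix.so
```

The point is that login never needs to know whether passwords, passcards, or something more exotic are doing the actual authentication.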
Here is the basic idea. You buy a card with some TRAM (say, a 256 MB card), or you buy a computer in which all of the memory is inherently non-volatile. ("Non-volatile" means that the data does not go away when the computer is turned off.)
No need to work hard to force data out to disk; this allows async updates to be managed more efficiently...
Run Intel Linux binaries on other Intel *IX OSes.
a386 is a C programming library which provides a virtual machine. The virtual machine is an abstraction of an Intel 386 running in protected mode. Functions in the library correspond to privileged instructions of a CPU and access to hardware. The intended use for the library is to serve as a minimal hardware abstraction layer for operating system research.
a386 is implemented with ordinary Unix system calls, so operating systems using this library can be run in a virtual machine on top of the hosting operating system. The library could also be implemented in terms of the privileged instructions provided by a CPU, in which case the operating system would be running on bare hardware.
Unlike VMware, which traps and emulates all the privileged instructions the operating system executes, here they are inlined.
An essay discussing approaches to allow web servers to support on the order of 10,000 simultaneous clients.
The particular issue is how to cope with the various requirements that fall out of this:
The clients will be issuing tens of thousands of concurrent I/O requests; how to manage this, as well as the fact that they'll all wind up waiting for responses;
How to control the code servicing each client.
Lots of options pop out, involving different combinations of:
Having single/multiple clients per thread;
Using blocking I/O, non-blocking I/O, asynchronous I/O;
Servicing requests from user space, or from the kernel...
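As a minimal sketch of the "single thread, non-blocking I/O" combination, here is an event loop using Python's selectors module, with a local socket pair standing in for an accepted client connection:

```python
import selectors
import socket

# One thread services many connections: sockets are non-blocking,
# and the selector tells us which ones are ready to be read.
sel = selectors.DefaultSelector()

def serve_ready(timeout=0.1):
    """Run one iteration of the event loop: echo data back on
    whichever registered sockets are currently readable."""
    for key, _events in sel.select(timeout):
        conn = key.fileobj
        data = conn.recv(4096)
        if data:
            conn.sendall(data)       # echo back to the client
        else:
            sel.unregister(conn)     # peer closed the connection
            conn.close()

# A socketpair stands in for a client connection that a real
# server would obtain from accept().
client, server_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client.sendall(b"ping")
serve_ready()
reply = client.recv(4096)
print(reply)  # b'ping'
```

With thousands of registered sockets, the same single loop keeps servicing whichever connections happen to be ready, which is exactly where the scaling questions above begin.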
Linux, as a result of code availability and widespread use, is being used heavily in research into different approaches to this.
By and large, the "simple" ways of dealing with the issues tend not to scale very well, whilst approaches like AIO (asynchronous I/O) lead to applications that are difficult to write and debug.
The user-mode kernel is a port of the Linux kernel to its own system call interface. It runs in a set of processes, resulting in a user-mode virtual machine.
This allows you to do things like:
Kernel debugging without rebooting, using ordinary tools like GDB, GPROF, and GCOV
Playing with new kernels more safely
(hosted at SourceForge)
Developing an Open Source Virtual Machine
Enables the possibility of suspending a machine to disk under Linux. It needs neither APM nor ACPI. Software suspend creates an image which is saved in swap partition(s). At the next boot, the kernel detects the saved image and restores it.
The really cool possibility is to suspend under one kernel version, and restore to another. If done quickly enough, this may allow upgrading kernels without the perceptible need to reboot...
Allows suspending processes and restarting them, possibly on another host, possibly running a somewhat different kernel version.