This page is based on material originally written by Kenneth R. Kinder in 1997, and further comments by Leon Brooks.
There are plenty of Linux (and Unix) myths creeping around the Internet, presenting a danger to the growth of Linux and its eventual overtaking of Windows, DOS, and Macintosh. To prevent such a tragedy, this document seeks to dispel some of the myths that plague Linux's reputation. Based on what I have seen in on-line newspapers, magazines, newsgroups, and on the web, I have compiled a list of false myths about Linux, and I explain where (if anywhere) these myths originated, and what the facts are.
See another version at Linux Myth Dispeller.
Note that it is fairly common for some of these questions to come not as true questions of fact, where someone is asking out of legitimate ignorance, but rather as trolls, questions asked maliciously to either be suggestive that Linux is inadequate, or to draw out ignorant answers.
In such cases, it may not be terribly useful to respond to such "politically motivated" questions, and responses should be considered in the somewhat "political" context of the Linux Advocacy mini-HOWTO, particularly the "Canon of Conduct" section that urges responsible advocacy behaviour.
Microsoft Windows NT Server 4.0 versus UNIX - a comparison made by a Microsoft Certified Professional!
Probably the most poorly justified myths about Linux concern its installation. While Linux can be made to have a challenging installation, and was, in the dark mists of the past, somewhat difficult to get installed, most distributions have improved the process substantially. It will likely never be totally painless, as there are likely to be some decisions to make, but it is certainly not the "reboot, find driver, reboot" mess from which many versions of Windows suffer.
A: I installed Linux in less than an hour, and was up and running. The installation can be handled manually, in which case it may take a while (this would be copying each file set and unpacking it by hand). However, nearly every distribution has an installation program that merely prompts you for what to install, and for some basic settings, then does all the work for you. After getting a Linux CD, you'll probably be up and running within an hour. In the olden days, this myth had some truth to it, and Linux can still be made hard to install if you insist on doing everything by hand.
A: There are two crucial answers to this:
This was once true, but these things do not remain static.
The growth of companies that produce distributions competitively has meant that the process has improved substantially, and continues to improve.
It is common for a Linux install to take about 20 minutes, and while it is convenient to have an expert around, it's no longer strictly necessary.
The last time I set up a PPP connection with the Debian ppp-config utility, it took not more than 3 minutes to get a CompuServe connection working, and that included some time actually connecting. This is a far cry from the days when it took a couple of hours searching for the right HOWTO and then recompiling the kernel, compiling and installing PPP utilities, and then piecing together CHAT scripts.
The OS that someone else installs is less of a hassle than the OS you have to install.
Recent versions of Microsoft Windows are not easier to install, and it is actually quite readily argued based on the preceding argument that Linux is likely to be substantially easier to install than the Microsoft alternatives.
But this misses the second critical point: OSes are complex beasts, and the only way that there won't be some complexity in their installation is if installation is Someone Else's Problem. People have no problems "installing Windows 98" because they don't have to, since it came preinstalled.
A: This myth is perpetuated by the fact that Linux is so customizable. The changes, recompilation, and other modifications that can be done under Unix systems but not under Windows make Linux an operating system that can be configured to do and be just about anything. When you install most Linux distributions, the OS is every bit as "set up" as Windows 95 or MacOS. The catch is, Windows 95 and MacOS have a limited set of further changes you can make. Experts, of course, may reconfigure more parts of such systems, perhaps by rewriting some of the utilities, but even everyday users are perfectly capable of configuring standard usage settings.
A: Setting up Linux can be a complex process, as a result of Linux being a powerfully flexible operating system.
That being said:
Makers of Linux distributions are continually improving the tools and defaults provided.
Some of these tools are intended to make configuration of system facilities reasonably approachable to relatively naive users.
It is fair to say that Windows and MacOS are more restrictive in the customizations that they offer. If your needs are simple, your bias may swing in their direction as a result.
It is, unfortunately, fair to say that setting up X applications is still a little bit problematic.
Getting X apps to come up on menus is not always an automatic process, and the diversity of X window managers and their configuration languages makes it a challenge to automate this.
Happily, this has become more of a priority in recent years. Major distributions such as Debian, Red Hat Linux, and SuSE provide tools that "register" applications for inclusion in automatically generated menus, and also have tools to generate such menus for some of the more popular window managers.
A: To some extent, this can be said about any operating system. Saying, however, that Linux has less install-time software than MacOS or Windows is laughable. Linux distributions come with all the development software, Internet software (besides Netscape), and system-related software you'll need. While Linux does come with games and some office-related software, these do leave something to be desired, but no more than those of Mac System or Windows. Because Linux is really a full Unix, it comes with everything you'd see in a standard Unix build too.
A: If anything, the extensive array of software offered in the average Linux distribution is so large as to be bewildering.
There tend to be a whole lot more development-oriented tools; the GNOME and KDE projects have been providing many of the "plain old desktop tools" that people have grown to expect; as these desktop environments improve, the situation will improve further.
A: After a "kitchen sink" install of a basic modern distribution, for example, the Mandrake 7.1 distribution (Install CD and Extras CD), which takes 26 minutes on a modern machine for both CDs, you are left with 2-3 gigabytes of software, most of it "everyday".
This includes, for Mandrake, at least two each of most office applications, such as word processor, spreadsheet, separate complete office suite (the Extras CD includes StarOffice 5.1), presentation software, vector and bitmap graphics editors, audio and video players, web browsers, file managers, PDF and PostScript viewers, professional publishing and typesetting system, about a hundred games, including a 200-plus-game-type Solitaire, dozens of screensavers, email, news, ICQ, IRC clients, remote access software (including encryption and platform-independent graphical remote access), calendars, security analysis tools, morphing and graphics processing, a digital camera interface, archiving, CD burning, emulators for running Windows, DOS, Macintosh and arcade-game programs, a CAD package, several program development environments, chemical, mathematical and physical modelling systems, and many others.
So far, I've only touched on the graphical tools available from the menus (under seven different window managers). Command-line tools for "everyday" tasks like reading email, processing words and calendaring can form a similar litany.
This is a pretty out of date list, as (virtually) nobody uses Mandrake 7.1, which dates back to 2000, anymore. These days, "everyone" uses Ubuntu, and likely does so by downloading a CD or DVD, and then adding things downloaded via an Internet connection. Names of software packages have changed a little, but really, the complexion of the result is no different. There's a boatload of potentially useful software, with more and better options than were the case in 2000.
The Linux system and kernel are very powerful. Of all the Linux myths, these are probably the most untrue.
A: Microsoft and Apple would have you believe that their operating systems multitask (run more than one program at once). Using the term loosely, they do. Using the term strictly, they only task-switch. Although more than one program may be open, you may notice that sometimes the system stops responding, perhaps while mounting (detecting) a CD, or scanning a floppy drive. That's because of cooperative multitasking, as opposed to Linux's preemptive multitasking.
A cooperative multitasker (such as Mac System 9 or Windows) will give a program control of the system until the program chooses to give it back. Therefore, when a program is taking a while on a specific procedure, it can hang up the system and deny other programs operating time. In a preemptive multitasker, a program is given a set number of clock cycles, then it is preempted, and another program has the system for a set number of clock cycles. Linux is preemptive through and through. Mac System has absolutely nothing preemptive about it (although Apple claims the new OS will be partially preemptive). Windows 3.1 has a preemptive mouse only. Windows 95 is partially preemptive. Between Apple and Microsoft, the only fully preemptive multitasker is Windows NT.
A: To properly answer this requires distinguishing between CMT (Cooperative MultiTasking) and PMT (Preemptive MultiTasking).
The two major classes of multitasking that people have been wont to argue over are cooperative multitasking and preemptive multitasking.
MacOS was long a characteristic example of "cooperative" multitasking; this is a form of multitasking where programs must be written to be "cooperative," relinquishing control voluntarily. Unfortunately, this approach does not always work out well; if one of those programs contains a bug, control may wind up getting locked up by one task that may have fallen into a "hole."
In contrast, preemptive multitasking, characteristic of time sharing systems that most certainly includes Linux, provides a model wherein processes tend to be written to pretend they have access to the whole machine, and the operating system kernel will share CPU resources with the various processes that are running, giving a few milliseconds to this process and then that one. If a particular process is "hogging" the processor, the OS will give that process its time slice, and then forcibly suspend it to give resources to other processes.
The need to be able to "take control" has some cost; it is likely that PMT systems run a few percent slower than an ideally configured cooperatively multitasking system. Unfortunately, CMT requires that all applications be well-behaved; one bad application could make your whole system unusable.
In practice, it turns out to be highly desirable to provide preemptive multitasking, as this behaves reasonably well virtually all the time.
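The difference can be sketched with a small, hypothetical Python experiment (the busy-loop child is just an illustrative stand-in for a misbehaving program): under a cooperative multitasker, a task that never yields would freeze everything else, but under a preemptive scheduler the parent keeps getting time slices and can forcibly kill the runaway task.

```python
import subprocess
import sys
import time

# Spawn a child that hogs the CPU forever: a task that would never
# "cooperate" by yielding control back to the system.
child = subprocess.Popen([sys.executable, "-c", "while True: pass"])

# Under preemptive multitasking, the parent still gets scheduled and
# keeps making progress, even with the busy loop running.
start = time.time()
ticks = 0
while time.time() - start < 1.0:
    ticks += 1

# The OS also lets us forcibly terminate the misbehaving task.
child.kill()
child.wait()

print(ticks > 0)  # True: the runaway child never locked us out
```

On a cooperative system the equivalent of the busy loop above is precisely what "hangs the whole machine"; here it is merely one process burning its own time slices.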
Linux, by modelling Unix, provided a quite pervasively PMT programming model from its beginnings.
Microsoft Windows and MacOS started as CMT systems, and have gradually moved towards a PMT model, but still retain some of their CMT roots. Most notably, their graphical infrastructures retain structures that require cooperative/sequential behaviour.
Over time, their multitasking abilities are likely to improve to increasingly resemble those on Unix and Linux, but it is fairly silly to contend that their multitasking abilities eclipse Linux's.
"Over time", their multitasking capabilities have indeed come to resemble those on Unix and Linux. Most pointedly, MacOS now uses a form of BSD Unix for its kernel, and includes a wide array of traditional Unix tools that will be familiar to users of Linux. It no longer "suffers" from the use of cooperative multitasking. This particular "myth" is obsolete pretty much altogether.
A: A few DOS programs that act as their own OS may do some things faster than their Linux counterparts. This is simply because they aren't being multitasked by the system. Other than that, Linux tends to be faster. An operating system that does operations comparable to Linux is NT, and Linux is well over twice as fast as NT. Mac System is consistently slower, as are most Windows programs.
A: This particular issue has become a relative non-issue, as the number of DOS-specific applications in use has diminished over time.
There have been some controversial attempts to benchmark Linux versus Windows NT, with partisans on both sides claiming "victory;" a full and fair explanation of that lies outside the scope of this document.
A: Hardware is often ignored by other operating systems. On the other hand, Linux takes advantage of all the hardware it can. Sometimes, if you have defective hardware that other operating systems don't take advantage of, Linux will crash. This is to be expected. Claiming an OS should remain stable when your memory doesn't retain information is unrealistic. A properly set up Linux system that is running on good hardware will nearly never crash. This is because if the operating system doesn't bring itself down, nothing will. Programs can never crash the system under Linux, because of the way it's built, with things like memory protection, instruction monitoring, and other devices built into any true kernel. For example, in Linux the "General Protection Fault" error can only be triggered if your computer's memory is simply not keeping its information (in which case, you should return it to the factory).
A: The assessment of "crashes" must consider the underlying causes, as well as the net effect.
The extremely pervasive use, by Linux, of memory management hardware to manage paging, as well as the ability of Linux to use any otherwise-unused memory as disk cache means that Linux tends to make maximum use of available memory.
This tends to maximize performance, and also tends to maximize the likelihood of being influenced by memory of questionable integrity. If you've got failing hardware, Linux may find the failures before some other OSes. That cannot reasonably be assessed as a failure of Linux.
This amounts to very nearly the same thing; if a graphics card or hard drive controller is "flaky", either in terms of hardware quality or firmware quality, the fact that it may fail is not a failure of the operating system.
Unfortunately, such failures can dump out arbitrary data into arbitrary locations in memory, thereby messing the system up very badly.
Segmentation Faults and Other Application Bugs
All too often, on Windows or MacOS, if a program crashes, this can readily pull down the rest of the system around it, as programs often are tightly integrated into the GUI which is tightly integrated into the OS kernel.
In contrast, if a program running on Linux has a bug, and in some manner "behaves badly," this normally has little or no permanent impact on the system at large. The interfaces to the OS kernel are reasonably "tightly guarded," graphical environment infrastructure has no special access to the kernel, and applications similarly have to "speak through guarded channels" to the graphical environment. And so, if a program should crash, this very rarely has any substantial impact on any of the other layers.
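As a rough illustration of that isolation, here is a small Python sketch (the crashing one-liner is just an illustrative way to force a bad memory access): a child process dereferences a null pointer, the MMU traps the access, and the kernel kills only that process, while the parent, and the rest of the system, carry on.

```python
import signal
import subprocess
import sys

# A program with a memory bug: reading address 0 through ctypes
# triggers a hardware fault that the kernel turns into SIGSEGV.
crasher = "import ctypes; ctypes.string_at(0)"
result = subprocess.run([sys.executable, "-c", crasher])

# Only the buggy process died; we are still running normally.
# A negative return code means "killed by that signal" on Unix.
print(result.returncode == -signal.SIGSEGV)
```

The crash is confined to one address space; nothing here required a reboot or took the desktop down with it.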
A: Final word: Any hardware that crashes Linux will almost certainly crash Windows NT or Windows 2000. On the other hand, Linux systems on which Windows had crashed regularly or intermittently have sometimes exceeded a continuous running time (uptime) of a thousand days (three years).
A: Does Linux ever support threads!
Linux supports fully preemptive threads for all programs and scripts that request them! The simple truth is, Linux has better threading than Windows 95 or NT, and Mac System and Windows don't have threads at all unless they are managed by the program or a third-party library.
A: The claim about Linux not supporting threading is simply false.
Modern versions of Linux provide fully POSIX kernel-based threads that are reentrant and schedule properly on SMP architectures.
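A quick way to see the kernel-based (1:1) threading model at work, sketched in Python: each interpreter thread is backed by its own kernel thread, so each reports a distinct kernel-visible thread ID via `threading.get_native_id()` (available from Python 3.8 on).

```python
import threading

# Each Python thread maps 1:1 onto a kernel thread on Linux, so the
# kernel assigns each one its own thread ID (TID).
native_ids = []
lock = threading.Lock()

def record_tid():
    with lock:
        native_ids.append(threading.get_native_id())

threads = [threading.Thread(target=record_tid) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four threads, four distinct kernel-visible IDs: these are real
# kernel-scheduled entities, not a user-space illusion.
print(len(set(native_ids)))  # 4
```

Because these are kernel threads, the scheduler can run them on separate CPUs of an SMP machine and preempt each independently.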
And user-space threading is also available, which tends to be a bit faster, as it doesn't push any details into the kernel, but correspondingly establishes a cooperative multitasking scheme that allows a thread to monopolize the process...
On Windows NT, the heavyweight tasking infrastructure discourages doing multitasking via creating separate processes.
It instead proves necessary to implement the cooperative multitasking approach of using threading, which has the entertaining effect of introducing new opportunities for deadlocks into processes...
On Linux, task-switching is nearly as quick as thread-switching on NT, which means that threading is needed less often.
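By way of contrast, here is a minimal Python sketch of how casually cheap process creation is on Unix and Linux: `fork()` gives the child a copy-on-write image of the parent, the child does its work and exits with a status, and the parent reaps it.

```python
import os

# fork() duplicates the current process; the child gets a cheap
# copy-on-write image of the parent's address space.
pid = os.fork()
if pid == 0:
    # Child: do some "work", then report a result via the exit status.
    os._exit(42)
else:
    # Parent: reap the child and recover its exit code.
    _, status = os.waitpid(pid, 0)
    print(os.waitstatus_to_exitcode(status))  # 42
```

Because this is so inexpensive, Unix programs have traditionally used separate processes, with their strong mutual isolation, where an NT program would be pushed toward threads sharing one address space.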
A: There are two ways an operating system can be big: in hard drive space, and in memory. DOS is always going to be smaller than Linux; if you think DOS is the operating system of the future, enjoy its compact design. Windows, on the other hand, is terribly bloated. While Windows 95 and Linux take up similar amounts of hard drive space, Linux has much more packed into the disk space used. Installations designed for desktop users run around 100 megs, with all the toys, gadgets, utilities, and development software; the same goes for Internet servers. In memory, Windows 95 takes up obscene amounts, enough to make a kernel programmer dizzy. Although the Windows 95 box says 4 megs, the OS can't even fit itself in 4 megs, and gets swapped in and out without any programs running. Linux, on the other hand, with all its power, takes up about 1/4 of the memory Windows 95 does. Alas, NT takes up more memory than any operating system to date, and Mac System's memory use is comparable to Windows 3.1's, which takes up about as much as Linux.
A: In the olden days, we thought that Common Lisp, requiring over 20MB of code for the compiler environment, and GNU Emacs, consuming maybe 8MB of memory and 20MB of code, were "huge, bloated" applications.
Now, a copy of Visual C++ probably eats 100MB of disk, and from what I hear, it is not recommended to run it without at least 64MB of RAM.
The contention that Linux uses "obscene" amounts of resources implies that there must be a term stronger than "obscene" to describe NT resource consumption...
A: The Linux Router Project boots and runs Linux usefully from a single floppy into two megabytes of RAM on an ordinary PC. Linux is also used in tiny embedded systems. Next question?
A: For Mac, it's AppleTalk. For Novell, it's IPX. For Windows, it's a mystery. For the Internet, it's TCP/IP. Linux supports them all. As you may know, TCP/IP (the Internet Protocol suite) is the best networking protocol, and is native to Unix. It is also native to Linux. Networking Linux can be done in one weekend (assuming you have network cards), with some reading, testing, and setting up. Connecting it to the Internet takes about 10 minutes. Networking always has some stigma to it, but Linux is certainly no worse than other operating systems.
A: The complexities of networking are neither caused by Linux nor are they all solved by Linux.
TCP/IP is a reasonably sophisticated set of protocols, permitting the construction of sophisticated and complex network configurations.
Linux can nicely support fairly sophisticated functions, including firewalling, IP masquerading, and, overall, the creation of network environments that will indeed be very complex.
MacOS and Windows 9x simply don't support this, and if you consider the class of network stuff that is complex to configure on Linux, it is likely that the equivalent Windows NT configuration will also be complex to configure.
In effect, the bad news is that complex network configurations are complex to manage.
At one time, the creation and configuration of network services on Linux required poring over HOWTOs and compiling, by hand, some set of kernel patches and networking utilities, and was challenging.
As with many other system facilities, useful properties are now available in default installed configurations, and distributions include tools to assist in configuring network facilities.
The last time I set up a CompuServe connection, the Debian utility pppconfig allowed me to get a PPP connection configured and working in a matter of about 5 minutes.
A: Because Linux is extremely flexible and adheres closely to published standards, especially in the networking arena, it is possible to combine two simple commands and establish an encrypted VPN (Virtual Private Network) between any two Unix machines, including Linux. This requires a purpose built set of applications on Windows, and is still not secure. Small, simple tools with big implications abound on Linux.
Because Linux distributions traditionally include many network tools, it is easy to test and debug a network involving a Linux machine. Most distributions of Linux have tools for operating as an "ambassador" between otherwise incompatible networking systems.
Generally, Unix-like systems have a reputation of being insecure. Linux compares very well to other Unixes, due to its open status. Another myth about open OSes in general is that they are insecure, based on the thought that their weaknesses are exposed in the source code. However, when the code is readily obtainable, more experts are likely to download it and report its bugs. On the other hand, with a closed system like Rhapsody or Windows NT, only hackers/crackers actually reverse engineer the code to exploit its security issues. Time has proven this theory: remember when the Netscape bug was discovered by a student? How likely is it that the bug would have been exposed if he weren't able to download the security-related source and inspect it? There are a growing number of Linux-based ISPs and web servers, and very few hack incidents have happened on these. Research security information at cert.org.
A: Unix systems have a reputation of insecurity; this comes from several factors:
Many would-be hackers have had an understanding of Unix; it can be somewhat easier to attack a system that is widely understood than something like OS/400 where there are far fewer people who deeply understand the system.
Unix systems have a history of being used in environments such as university research where it was highly desirable to have relatively permissive system configurations.
As a result of being "permissive by default," it is incorrectly assumed that Unix systems cannot be made secure.
One can throw in with this the factor that security is not a "feature," but rather an "emergent property." You don't simply "add in" security; you have to build a secure system.
Common Linux distributions like Red Hat's activate a fair number of relatively permissive services; in order to secure the system, there's a fair bit of stuff that has to be turned off.
TPEP certification work has never been attempted with the Windows 9x series, as the MS-DOS layer underneath makes insecurity an "emergent property," downright obliterating any possibility of making it secure in any useful way; in contrast, Windows NT gets a lot of press out of its C2 certification.
But note that in order for Windows NT to be thus secured, it must be configured with a particular service pack level, C2 security patches, and configuration must be done as documented thus; this is nowhere similar to the default configuration, and may represent a configuration that would be found unusable by most users.
The main security tools, namely the Unix permission bits attached to files, are not terribly well understood.
It is commonly asserted that the lack of ACLs on Linux (which isn't actually true; ACL efforts such as Rule Set Based Access Control (RSBAC) for Linux are progressing...) severely hampers one's ability to secure it, and that it is desperately important to implement them.
To the contrary, it rather appears that the fine-grained capabilities that ACLs provide may not improve the security of a system, but rather, since this is a more complex and involved process, may diminish the administrator's ability to know whether or not the system truly is secure.
It must be acknowledged that these positions are somewhat controversial; what is not is that ACLs add complexity to the system. If you're going to make a system "more secure" via ACLs, then you'd better be prepared to allocate substantial resources towards the design and implementation of ACL configuration for your environment.
The availability of source code is, to some extent, a double-edged sword.
On the one hand, given implementation information, the "bad guys" are provided the ability to analyze the system for weaknesses, which is what some argue to be a bad thing.
On the other hand, the serious security folk that do things like design cryptographic standards have to make the assumption that they are defending data from attackers who know the algorithms.
And those that pretend to build secure systems without making that assumption run the risk of the "Netscape SSL Random Number Crack." At one point, the purportedly secure Netscape SSL scheme was vulnerable due to a poorly implemented random number generator.
Ian Goldberg (we're both U(W) Math alumni) didn't crack Netscape SSL as a result of having source code; he did so by analysis of its behaviour; see Netscape Security Problems for the details.
Note that the fact that source code is kept secret didn't improve security. On the contrary: there is no security through obscurity!
Netscape has never released crypto software sources, as the crypto software they used was proprietary code belonging to RSA Data Security.
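A toy Python sketch of the flaw in spirit (not Netscape's actual code, and deliberately simplified): seed a pseudo-random generator from the clock, and anyone who can guess the timestamp can regenerate the "random" session key, no source code required.

```python
import random
import time

# Seeding from the wall clock: a value an attacker can narrow down
# to a small range just by knowing roughly when the key was made.
seed = int(time.time())

server = random.Random(seed)
session_key = server.getrandbits(128)  # the "secret" session key

# The attacker simply tries the guessable seed and gets the same key.
attacker = random.Random(seed)
recovered_key = attacker.getrandbits(128)

print(recovered_key == session_key)  # True
```

The lesson is the one the surrounding text draws: secrecy of the algorithm added nothing, because the weakness was a predictable seed, and that is exactly what behavioural analysis uncovered.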
The position that Unix is extremely secure, and more securable than Windows NT, is readily supportable.
If you examine the TPEP list of certified-to-be-secure systems, you will find that all of the top levels of certification are filled with secured versions of Unix, and most of the certified products are Unix versions from one vendor or another. All of the products certified at level B2 or better are Unix variants.
Compare that to Windows NT, which is certified at the lowest grade in which the NSA Trusted Computer System Evaluation Criteria currently offers ratings, namely C2.
Windows NT may be securable to a degree that compares to the lowest levels that Unix systems offer; in contrast, Windows 9x and Windows CE couldn't possibly be certified any higher than TPEP's Level D, Minimal Protection.
I would suggest that Linux isn't inherently wonderfully secure, but that it, like many other systems, is capable of being reasonably well-secured, if managed by a competent administrator who understands computer security. Without an administrator who understands security no system can ever be made secure.
A: Unix systems, including Linux, are much more powerful than most, and so more useful to a cracker who breaks into them. For this reason alone, they are much more subject to attack than most other forms of system. A car thief, faced with a $30,000 car which was twice as difficult to steal as a $1000 old klunker, would almost always attack the more expensive vehicle because he expects to get more out of it.
Software, and the development of it, is great under Linux. The myths here are nearly as bad as the installation myths.
A: Most Linux distributions come with a huge collection of software, certainly more than you'd find in Windows or MacOS. Linux does not come with much office software, but there is some to download, which compares very well to lousy Mac and Windows office software like Works (except with full justification!). Just like on a Macintosh or Windows, you can spend large amounts of money on commercial office suites; Anywhere RealTime, StarOffice, and others offer the features of MS Office, but tend to run cleaner and faster. Because, for many users, non-copylefted software is offensive, Linux has a vast collection of freeware programs.
Here are a few software resources...
Woven Goods for Linux has a fair collection of Linux apps in a well-categorized list.
Linux Applications and Utilities Page is a great resource page, and has lots of great applications ready for download.
TkDesk is simply glorious. This program combines the greatest features of Finder, Explorer with Windows 95, and Norton Desktop, but on steroids. Preemptive-thread steroids. Its amazing file manager, menu bar, and utilities make a Linux or Unix desktop addictive! [Freeware, GPL]
Window Programming Environment, OK, did come with my distribution, but doesn't come with many others. It's a great programming editor for both X and character mode. It features custom syntax highlighting, error message parsing, and seamless compiler shelling. Those who have used Borland's DOS interface found in Turbo for DOS will find its similarities uncanny! [Freeware, GPL]
Visual TCL is much like a Visual Basic IDE for TCL/TK. [Freeware, GPL]
A: In the years since 1997, some improved software has come along, and there are several "Office-like" systems:
KDE and its KOffice components, which are also acquiring functionality and polish.
The most common argument against Linux at this point involves the contention that Microsoft Office is not available for Linux. These other options are not considered acceptable, not because they have any particular lacks in terms of functionality, but rather are found wanting simply because they are not Microsoft Office.
There is no particularly useful response to this argument: someone who requires Microsoft Office because it is Microsoft Office will be able to use the same (rather circular) argument against any software system that is not Microsoft Office, and arguing about this is a waste of time and breath.
A: A lot of the Unix software for Linux does have a learning curve. Other, more modern Linux software is often for X (the GUI) and is very easy to use and learn. Older Unix software may take some time to learn, but once learned it is more productive than Macintosh software, and comparable to Windows-level productivity. Newer Linux software respects older software standards for fast usage, and combines those tools with modern styles to make software easy to learn.
A: The most critical difference to note is that Linux is not the same as MS-DOS, and the second most critical difference that Linux is not the same as Windows.
The system architectures are different, and thus if a user comes in with all sorts of assumptions that are based on a "MS-DOS-centric" view of the world, where "the way MS-DOS did it was right," and alternatives are not, then the fact that Linux is different will be unjustly perceived as inferiority.
A: Just like any other modern Unix, Linux supports Java applications with kernel integration for the interpreter, compiles Java applications and applets, and has Java-enabled web browsers (such as Netscape).
Here is some information on Java and Linux:
Linux Java Tips and Hints Page is mainly for developers, and has tips and tricks for programmers who use Java.
A: There are several JVMs (Java Virtual Machines) available for Linux, and increasing availability of good class libraries, as well as development kits such as IBM's Developer Kit for Linux.
A: When you get Linux, you get tons of great compilers (including GCC and G++). Most distributions include a program called Window Programming Environment (WPE), which provides a programming environment with custom syntax highlighting, compiling, and everything else an IDE should have. The operating system also provides libraries for things you must normally program yourself (including sound, graphics, and more). This myth is totally ungrounded, and is really pretty silly.
A: There are two views on this:
A Linux shell is, by itself, a sophisticated integrated development environment.
I subscribe to this view; I don't feel much need for the "crutches" of pull-down menus when it's simple enough to switch to a new virtual console and do little more than type make.
The other view is that semi-graphical IDEs are valuable. Happily, a number of IDEs of decent quality are becoming available, among them:
Linux usability has never been better. Nevertheless, myths constantly bombard the brilliant Linux user interfaces.
A: But there is one! There is -- X-Windows. Its drivers have been ported to Intel x86, and it's great! Although the interface isn't as standardized as the Mac's or Windows', I'd say it's still better. Some of the widgets are superb, and it's a very fast interface. The myth that Linux has no GUI comes from those ignorant enough to believe that an ISP's Unix shell is as far as Unix extends.
People consistently decry X for doing precisely what it was designed to do: provide a mechanism to allow OTHERS to build GUI systems.
The X Window System is not a GUI; it is a protocol for drawing windows on the screen. It is designed to allow one to build a GUI on top of it; what's more, it is designed to allow multiple GUIs to co-exist at the same time.
It's fair to say that there has not been widespread agreement as to which GUI might become "dominant." The fact that Motif, the "de facto standard" for a long time, languished technically, due in no small part to a distinctly non-open-source license, was hurtful to the progression of GUIs on X.
In similar fashion, other technologies surrounding X have languished as a result of "proprietary warfare" that surrounded the transition of X activities from the X Consortium to The Open Group.
Only since 1999 has there been much in the way of new technical innovation with X; a lot of developers have joined The XFree86 Project, and once version 4 becomes generally available, this is likely to spur on considerable further developments.
People have built "GUIs that suck" atop X.
Gnome certainly is (serious competition to the Mac or Windows) ... I get a charge out of seeing the X Window System work the way we intended...
Q: Why is the command prompt so much worse than DOS's? There's no 4DOS, is there?
A: Linux, like Unix, lets you choose your command prompt. There are Bash and Tcsh, which are clones of various Unix shells. A better statement might be that Linux's command prompts are like DOS's on steroids. They support redirection operators, scripts, and command-prompt functions! If you don't like the power of these shells, you can use lsh, a shell that looks, acts, and feels like DOS! So, if you view power as bad, Linux is "worse."
A: This claim doesn't hold up terribly well.
About the only way it "works" is if one is using a shell that provides no ability to edit command lines, no ability to go back to the history of previous commands, and the like.
That is a situation that one would commonly encounter on a real Unix, when using a real Bourne shell, or a totally-unconfigured Korn shell.
In contrast, on Linux, the lowest common denominator default happens to be Bash, which is a pretty powerful shell, with reasonable "creature comforts" such as command line editing and the ability to pull up commands from prior history, modify, and invoke them.
And this ignores the availability of shells that support writing powerful scripts, and really powerful globbing as provided by zsh. (I love zsh.)
DOS Command Language is a pale, pale shade of the power available in any of the shells available on Linux.
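As a small illustration of the gap, here is a sketch of a few things any of these shells can do directly at the prompt; the filenames are hypothetical, chosen just for the demonstration:

```shell
# Create a few sample files to work with (hypothetical names).
mkdir -p /tmp/demo && cd /tmp/demo
touch report1.txt report2.txt notes.log

# Globbing: match only the .txt files.
ls *.txt                       # report1.txt  report2.txt

# Redirection and pipes: count the .txt files.
ls *.txt | wc -l               # 2

# A loop, typed directly at the prompt -- no batch file needed.
for f in *.txt; do echo "found $f"; done

# A shell function, usable like any command for the rest of the session.
shout() { echo "$1" | tr 'a-z' 'A-Z'; }
shout hello                    # HELLO
```

Doing the equivalent of the loop or the function under DOS would require writing and saving a batch file; here it is a single line of interactive input.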
A: For Windows and Mac users, their ISP's Unix shell is often the full extent of their Unix experience. The fact is, beyond those text-mode shells, Linux supports graphical X-Windows sessions for terminal machines. Unix machines have had this for about a decade, Linux has had it for years, and only now is NT getting into it.
A: It does support "graphical networking." Graphical networking is inherent in Linux. Non-Unix systems require purpose-built programs to perform graphical networking. Linux uses the X Windowing System, which works without modification across networks, effectively doing the same job as Windows Terminal Server but without the per-seat licencing or a need for a huge central server. Many general-purpose tools, such as the SecureSHell communications system, can be used to improve graphical networking by encrypting and/or compressing the networked graphical information.
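As a concrete sketch of the SecureSHell approach mentioned above: the commands below assume a reachable machine called "remotehost" whose sshd permits X11 forwarding, so treat them as an illustrative fragment rather than a recipe:

```shell
# Run a graphical program on remotehost, displayed on the local screen.
# The -X option enables X11 forwarding, so the graphical traffic is
# carried (encrypted) inside the ssh connection.
ssh -X user@remotehost xclock

# Add -C to compress the forwarded traffic over a slow link:
ssh -X -C user@remotehost xterm
```

No per-seat licence, no special "terminal server" edition of the operating system: this is just the ordinary X protocol doing what it was designed to do.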
Most Linux distributions have graphical network configuration tools that work with Windows (SMB) networking as well as with industry standards like TCP/IP and NFS.
A: This question doesn't make much sense. I don't think this is a "commonly held myth."
A: Sometimes Mac users, and even more often Windows users, have a bad experience with an X Window System and never seem to get over it. All you have to do to learn that a user interface is a personal preference is to listen to a Mac vs. Windows spam-debate on Usenet. With X, most aspects of the interface are so configurable that the user can get his or her desktop to look and feel like just about anything, without opening any source files.
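As one small example of that configurability, a few lines of standard X resource syntax in a user's ~/.Xresources file restyle every xterm on the desktop without recompiling anything (the colour and size choices here are, of course, arbitrary):

```
! ~/.Xresources -- per-user X resource settings.
! Load them with: xrdb -merge ~/.Xresources
XTerm*background: black
XTerm*foreground: grey90
XTerm*scrollBar:  true
XTerm*saveLines:  1000
```

Window managers expose far more of the interface again, right down to keybindings and window decorations, through similar plain-text configuration.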
A: It's fair to say that the configuration of X desktops has had a history of "klunkiness."
I would direct attention back to Jim Gettys' quote; this is an area that has seen considerable development attention over the last couple of years. The results thus far are promising, with more improvements to come.
It's safe to say Linux is the most compatible operating system ever. The compatibility myths are fueled by those who believe Windows is the only operating system that is compatible, and therefore by default Linux must not be compatible with other standards.
A: Linux was created on a PC running Minix, a smaller Unix clone. While it is most popular on the PC, the Linux kernel has been ported to Power Mac hardware, Sparc workstations, DEC machines, and more. In effect, it runs on everything from IBM System/390 and Fujitsu mainframes down to microcontrollers and even a wristwatch.
The number of platforms supported by Windows NT has been falling, with PPC and MIPS being discontinued; there is no question but that Linux supports a whopping lot more platforms.
A: Natively, Linux supports Minix, System V, a.out, and ELF executable formats. In beta now, Linux supports Java executables (J-code). Most Linux distributions come with DOSemu, a DOS emulator. Not to mention these fine Linux emulation programs... (Information taken from Linux Applications and Utilities)
Bochs - a portable shareware x86 emulator for X Window systems
BSVC - a microprocessor simulation framework (Motorola 68000 & HECTOR 1600)
DOSEMU - the Linux DOS emulator
Euphoric - an Oric emulator/simulator for Linux
Executor - a commercial Macintosh emulator by ARDI
Frodo - the free portable C64 emulator for BeOS/Unix/MacOS/AmigaOS
Snes96 - a Super Nintendo Entertainment System emulator
Stella - an Atari 2600 emulator
STonX - an Atari ST emulator for X11
U.A.E. - a Unix Amiga emulator
Virtual 2600 - an Atari 2600 (!) emulator
Virtual GameBoy - a portable GameBoy emulator for the Nintendo game machine
WABI 2.2 for Linux - run Windows 3.1 applications on Linux-based workstations
WINE - an alpha-level Windows emulator
XZX - a Sinclair ZX Spectrum 48/128/+3 emulator for Unix/X11
xz80 - a Sinclair ZX Spectrum emulator
A: Again, the claim that Linux can't emulate other platforms has long been an obvious false claim.
A: This is a complete fallacy. Not only is Linux friendly to other operating systems on the same drive (not messing up their partitions, etc.), it uses their file systems, and includes utilities to help you run more than one OS. Linux's LILO boot loader will load Linux, DOS/Win95, OS/2, and more. Its file-system mounting support allows you to use other file systems, such as DOS's FAT-16 (with Windows 95 long filenames), OS/2's file system, Minix's, and others. Even if you don't have other operating systems, Linux has emulators to let you run programs that weren't even made for Linux.
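For instance, making a DOS/Windows 95 partition's files appear in the Linux directory tree takes one command; the device name /dev/hda1 below is an assumption (yours may differ), and the mount itself requires root, so this is a sketch rather than a recipe:

```shell
# Mount the first partition of the first IDE disk as a VFAT
# (FAT with Windows 95 long filenames) filesystem.
mount -t vfat /dev/hda1 /mnt/dos

# Or add this line to /etc/fstab so it is mounted at every boot:
#   /dev/hda1  /mnt/dos  vfat  defaults  0 0
```

Once mounted, the DOS files are ordinary files under /mnt/dos, readable and writable by ordinary Linux programs.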
A: This is another "myth" that does not come up terribly often, and comes more often as a troll than it does as a real question.
A: Which file formats are and aren't supported is really up to the applications. Linux applications support as many file formats as those on other platforms, if not more. When programmers create an application, they have to decide which file formats to support. In the Linux free software community, there is a wealth of shared and static libraries that a programmer can use to support many file formats. The equivalent Windows and Mac libraries usually cost programmers obscene amounts of money, and so are less likely to be bought. Also, since many Linux programs are truly free and come with their source code, other users add file formats to existing applications. For example, look at the extra toys the text editor Emacs supports!
A: This is a rather thorny matter.
Firstly, it must be pointed out that MS Office data formats cannot be treated as "standards" under any reasonable definition of the word "standard."
Microsoft has worked hard over the last few years to continually release new variations on the data formats for MS Office applications seemingly so as to make it as difficult as possible to allow outsiders to interoperate with their formats.
Inasmuch as Microsoft has not detailed the content of any Office document format, only the general structure of some Office documents, the contents of individual Office documents cannot be said to be "standard" or "non-standard". As Windows networking is in practice defined by "what Windows does on the wire", the standard for Word documents is often defined by "what Word produces".
Microsoft's knowledge of their own "standards" is so poor that when it came time to write a book on the SMB networking "standard", Microsoft had to rely on information from the Samba project. They simply did not know themselves.
Here is a set of questions presented as part of the Linux for Business conference in London. They do indeed seem representative of common "FUD" questions presented by vendors that oppose the use of Free Software in favor of their more expensive alternatives.