Web Hosting: Linux or Windows?


  • #16
    Originally posted by aneeshm
    There is a reason you get paid to do command-line work - it is more powerful and productive.
    Well said. Shell scripts + cron makes administration so much easier.

    Besides, when the GUI goes kaput, we can always fall back on a shell. The same thing can't be said about Windows.
    (\__/) 07/07/1937 - Never forget
    (='.'=) "Claims demand evidence; extraordinary claims demand extraordinary evidence." -- Carl Sagan
    (")_(") "Starting the fire from within."

    Comment


    • #17
      Originally posted by aneeshm
      And Asher, you must have realised by now that "GUIs make easy jobs easier, and difficult jobs impossible". I've been a victim of this truism. There is a reason you get paid to do command-line work - it is more powerful and productive.
      GUIs are more powerful for some things, Command-line for others.

      Linux relies too heavily on command-line or obscure config files when a GUI suffices -- I'm thinking Apache configuration vs IIS configuration, for example.

      The reason I get paid to do command-line work isn't because it's more powerful and productive, but because it's needlessly complicated with terse commands and ambiguous outputs.
      "The issue is there are still many people out there that use religion as a crutch for bigotry and hate. Like Ben."
      Ben Kenobi: "That means I'm doing something right. "

      Comment


      • #18
        Originally posted by Urban Ranger
        Besides, when the GUI goes kaput, we can always fall back on a shell. The same thing can't be said about Windows.
        What exactly do you mean? If the Windows GUI crashes, it doesn't take down the operating system.

        More FUD from you.

        Comment


        • #19
          @ UR

          Dam' true. There have been situations where, in Windows, I would have resigned myself to my fate and let my unsaved work go. Now I simply kill the offending process and get on with my work.

          Another thing - after using bash + gcc + vi, using the Windows/DOS command-line + turboc++ at school feels like I'm in some nightmare and I've gone back to the dark ages of computing.

          Can you believe that, thanks to the "ease of administration" of Windows, the teacher has set up one user for an entire grade/class, and the whole batch of CS students use that - one account with a home directory where everyone has their own subdirectory, and everyone has full write permission. Anyone could delete everyone's work in one keystroke - all the work everyone did for the entire year.

          And again, at home I run a 2.0 GHz Athlon 64 3200+ with two 512 MB DDR sticks, dual channel, with Ubuntu 5.04, while the school has a 2.4 GHz P4 running Windows. Again, in terms of responsiveness, there is no comparison, in spite of the school's processors having a higher clock speed (I know, AMD is better in other areas, and that I have eight times the core memory the school systems have).

          Comment


          • #20
            Originally posted by aneeshm
            @ UR

            Dam' true. There have been situations where, in Windows, I would have resigned myself to my fate and let my unsaved work go. Now I simply kill the offending process and get on with my work.

            Another thing - after using bash + gcc + vi, using the Windows/DOS command-line + turboc++ at school feels like I'm in some nightmare and I've gone back to the dark ages of computing.
            Wait, using literally antique software (bash/vi/gcc), you feel you're going back to the dark ages?

            And the whole point of Windows is that you shouldn't be using DOS.

            Compare Visual Studio 2005 with bash + gcc + vi, then see what it's like to go back to the dark ages.

            Can you believe that, thanks to the "ease of administration" of Windows, the teacher has set up one user for an entire grade/class, and the whole batch of CS students use that - one account with a home directory where everyone has their own subdirectory, and everyone has full write permission. Anyone could delete everyone's work in one keystroke - all the work everyone did for the entire year.
            That has absolutely nothing to do with the ease of use of Windows and everything to do with the stupidity of your admin, and perhaps the people who use that as some kind of argument against Windows...

            And again, at home I run a 2.0 GHz Athlon 64 3200+ with two 512 MB DDR sticks, dual channel, with Ubuntu 5.04, while the school has a 2.4 GHz P4 running Windows. Again, in terms of responsiveness, there is no comparison, in spite of the school's processors having a higher clock speed (I know, AMD is better in other areas, and that I have eight times the core memory the school systems have).
            I agree, the Windows GUI is far more responsive than XWindows.

            Comment


            • #21
              @ Asher

              I think he was referring not to the failure of the GUI per se, but a system hang which stalls the GUI. A hang, in other words. And how, pray, does a normal user, without inside knowledge, access the command-line part? In Linux, you Ctrl-Alt-F1, login, find and kill the process causing the problem, and you're done. How do you recover from a hang of the GUI in Windows?
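For anyone who hasn't tried it, the kill-from-a-console step looks roughly like this (a minimal sketch; a backgrounded `sleep` stands in for the frozen application, found by name with `pgrep`):

```shell
# Simulate a hung program, then find and kill it by name, the same way
# one would from a text console after Ctrl-Alt-F1.
sleep 300 &                      # stand-in for the frozen application
pid=$(pgrep -n -x sleep)         # newest process named exactly "sleep"
kill "$pid"                      # polite SIGTERM first; `kill -9` if ignored
wait "$pid" 2>/dev/null || true
echo "killed $pid"
```

In practice you would `pgrep` the real program's name, and escalate to `kill -9` only if SIGTERM is ignored.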

              Modularisation is a good thing - having unnecessary things in the kernel, like Windows does, is not. Having the kernel, shell (BASH, sh, ksh, csh), window system (X), and window managers (KDE, GNOME, XFCE, etc.) separate is, IMO, a very good thing. One going (except the kernel) does not necessarily mean the failure of the others.

              As an example, I present to you this little tale:


              The legendary UNIX rescue

              Have you ever left your terminal logged in, only to find when you came
              back to it that a (supposed) friend had typed "rm -rf ~/*" and was
              hovering over the keyboard with threats along the lines of "lend me a
              fiver 'til Thursday, or I hit return"? Undoubtedly the person in
              question would not have had the nerve to inflict such a trauma upon
              you, and was doing it in jest. So you've probably never experienced the
              worst of such disasters....

              It was a quiet Wednesday afternoon. Wednesday, 1st October, 15:15
              BST, to be precise, when Peter, an office-mate of mine, leaned away
              from his terminal and said to me, "Mario, I'm having a little trouble
              sending mail." Knowing that msg was capable of confusing even the
              most capable of people, I sauntered over to his terminal to see what
              was wrong. A strange error message of the form (I forget the exact
              details) "cannot access /foo/bar for userid 147" had been issued by
              msg. My first thought was "Who's userid 147?; the sender of the
              message, the destination, or what?" So I leant over to another
              terminal, already logged in, and typed

              grep 147 /etc/passwd
              only to receive the response
              /etc/passwd: No such file or directory.

              Instantly, I guessed that something was amiss. This was confirmed
              when in response to

              ls /etc
              I got
              ls: not found.

              I suggested to Peter that it would be a good idea not to try anything
              for a while, and went off to find our system manager.
              When I arrived at his office, his door was ajar, and within ten
              seconds I realised what the problem was. James, our manager, was
              sat down, head in hands, hands between knees, as one whose world has
              just come to an end. Our newly-appointed system programmer, Neil, was
              beside him, gazing listlessly at the screen of his terminal. And at
              the top of the screen I spied the following lines:

              # cd
              # rm -rf *

              Oh, ****, I thought. That would just about explain it.
              I can't remember what happened in the succeeding minutes; my memory is
              just a blur. I do remember trying ls (again), ps, who and maybe a few
              other commands beside, all to no avail. The next thing I remember was
              being at my terminal again (a multi-window graphics terminal), and
              typing

              cd /
              echo *

              I owe a debt of thanks to David Korn for making echo a built-in of his
              shell; needless to say, /bin, together with /bin/echo, had been
              deleted. What transpired in the next few minutes was that /dev, /etc
              and /lib had also gone in their entirety; fortunately Neil had
              interrupted rm while it was somewhere down below /news, and /tmp, /usr
              and /users were all untouched.
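The `echo *` trick, incidentally, is easy to verify: globbing is performed by the shell itself, and `echo` is a builtin, so no binary under /bin is involved (a small sketch using a throwaway directory):

```shell
# `echo` is a shell builtin and `*` is expanded by the shell, so a
# directory's contents can be listed even with /bin/ls gone.
demo=$(mktemp -d)
cd "$demo"
touch alpha beta gamma
echo *        # the shell expands the glob: alpha beta gamma
```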

              Meanwhile James had made for our tape cupboard and had retrieved what
              claimed to be a dump tape of the root filesystem, taken four weeks
              earlier. The pressing question was, "How do we recover the contents
              of the tape?". Not only had we lost /etc/restore, but all of the
              device entries for the tape deck had vanished. And where does mknod
              live? You guessed it, /etc. How about recovery across Ethernet of
              any of this from another VAX? Well, /bin/tar had gone, and
              thoughtfully the Berkeley people had put rcp in /bin in the 4.3
              distribution. What's more, none of the Ether stuff wanted to know
              without /etc/hosts at least. We found a version of cpio in
              /usr/local, but that was unlikely to do us any good without a tape
              deck.

              Alternatively, we could get the boot tape out and rebuild the root
              filesystem, but neither James nor Neil had done that before, and we
              weren't sure that the first thing to happen would be that the whole
              disk would be re-formatted, losing all our user files. (We take dumps
              of the user files every Thursday; by Murphy's Law this had to happen
              on a Wednesday). Another solution might be to borrow a disk from
              another VAX, boot off that, and tidy up later, but that would have
              entailed calling the DEC engineer out, at the very least. We had a
              number of users in the final throes of writing up PhD theses and the
              loss of maybe a week's work (not to mention the machine down time)
              was unthinkable.

              So, what to do? The next idea was to write a program to make a device
              descriptor for the tape deck, but we all know where cc, as and ld
              live. Or maybe make skeletal entries for /etc/passwd, /etc/hosts and
              so on, so that /usr/bin/ftp would work. By sheer luck, I had a
              gnuemacs still running in one of my windows, which we could use to
              create passwd, etc., but the first step was to create a directory to
              put them in. Of course /bin/mkdir had gone, and so had /bin/mv, so we
              couldn't rename /tmp to /etc. However, this looked like a reasonable
              line of attack.

              By now we had been joined by Alasdair, our resident UNIX guru, and as
              luck would have it, someone who knows VAX assembler. So our plan
              became this: write a program in assembler which would either rename
              /tmp to /etc, or make /etc, assemble it on another VAX, uuencode it,
              type in the uuencoded file using my gnu, uudecode it (some bright
              spark had thought to put uudecode in /usr/bin), run it, and hey
              presto, it would all be plain sailing from there. By yet another
              miracle of good fortune, the terminal from which the damage had been
              done was still su'd to root (su is in /bin, remember?), so at least we
              stood a chance of all this working.

              Off we set on our merry way, and within only an hour we had managed to
              concoct the dozen or so lines of assembler to create /etc. The
              stripped binary was only 76 bytes long, so we converted it to hex
              (slightly more readable than the output of uuencode), and typed it in
              using my editor. If any of you ever have the same problem, here's the
              hex for future reference:

              070100002c000000000000000000000000000000000000000000000000000000
              0000dd8fff010000dd8f27000000fb02ef07000000fb01ef070000000000bc8f
              8800040000bc012f65746300

              I had a handy program around (doesn't everybody?) for converting ASCII
              hex to binary, and the output of /usr/bin/sum tallied with our
              original binary. But hang on---how do you set execute permission
              without /bin/chmod? A few seconds thought (which as usual, lasted a
              couple of minutes) suggested that we write the binary on top of an
              already existing binary, owned by me...problem solved.
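That permission trick is simple to demonstrate: redirecting output over an existing file replaces its contents but leaves its mode bits untouched (a small sketch in a throwaway directory):

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho old\n' > tool
chmod +x tool                          # executable once...
printf '#!/bin/sh\necho new\n' > tool  # ...overwrite: mode bits survive
./tool                                 # runs without any further chmod
```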
              So along we trotted to the terminal with the root login, carefully
              remembered to set the umask to 0 (so that I could create files in it
              using my gnu), and ran the binary. So now we had a /etc, writable by
              all. From there it was but a few easy steps to creating passwd,
              hosts, services, protocols, (etc), and then ftp was willing to play
              ball. Then we recovered the contents of /bin across the ether (it's
              amazing how much you come to miss ls after just a few, short hours),
              and selected files from /etc. The key file was /etc/rrestore, with
              which we recovered /dev from the dump tape, and the rest is history.
              Now, you're asking yourself (as I am), what's the moral of this story?
              Well, for one thing, you must always remember the immortal words,
              DON'T PANIC. Our initial reaction was to reboot the machine and try
              everything as single user, but it's unlikely it would have come up
              without /etc/init and /bin/sh. Rational thought saved us from this
              one.

              The next thing to remember is that UNIX tools really can be put to
              unusual purposes. Even without my gnuemacs, we could have survived by
              using, say, /usr/bin/grep as a substitute for /bin/cat.
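That substitution really works: `grep` with an empty pattern matches every line of its input, so it prints a file whole, just as `cat` would (a quick sketch):

```shell
tmp=$(mktemp)
printf 'line one\nline two\n' > "$tmp"
grep '' "$tmp"     # empty pattern matches every line: a stand-in for cat
```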
              And the final thing is, it's amazing how much of the system you can
              delete without it falling apart completely. Apart from the fact that
              nobody could login (/bin/login?), and most of the useful commands
              had gone, everything else seemed normal. Of course, some things can't
              stand life without say /etc/termcap, or /dev/kmem, or /etc/utmp, but
              by and large it all hangs together.

              I shall leave you with this question: if you were placed in the same
              situation, and had the presence of mind that always comes with
              hindsight, could you have got out of it in a simpler or easier way?
              Answers on a postage stamp to:
              Mario Wolczko

              Comment


              • #22
                Originally posted by aneeshm
                @ Asher

                I think he was referring not to the failure of the GUI per se, but a system hang which stalls the GUI. A hang, in other words. And how, pray, does a normal user, without inside knowledge, access the command-line part? In Linux, you Ctrl-Alt-F1, login, find and kill the process causing the problem, and you're done. How do you recover from a hang of the GUI in Windows?
                CTRL-ALT-DEL, find and kill the process causing the problem, and you're done...

                Modularisation is a good thing - having unnecessary things in the kernel, like Windows does, is not. Having the kernel, shell (BASH, sh, ksh, csh), window system (X), and window managers (KDE, GNOME, XFCE, etc.) separate is, IMO, a very good thing. One going (except the kernel) does not necessarily mean the failure of the others.
                You shot yourself in the foot with this argument.

                Linux is a monolithic kernel, which by definition means it has far more things in the kernel. The Windows NT kernel is more of a microkernel/monolithic hybrid.

                Windows doesn't have the GUI (Windows Explorer) running in the kernel like you seem to think it does. Windows is far more modular than Linux. All of the above (kernel, shell (DOS), window system/manager [these SHOULD be integrated for lower latencies and responsiveness]) are separate on Windows as well.

                If Linux were truly modular you wouldn't need to recompile the kernel to change so much...

                BTW, your example story goes a long way to show just how terrible the user-friendliness of Linux really is.

                Comment


                • #23
                  Originally posted by Asher

                  Wait, using literally antique software (bash/vi/gcc), you feel you're going back to the dark ages?

                  And the whole point of Windows is that you shouldn't be using DOS.

                  Compare Visual Studio 2005 with bash + gcc + vi, then see what it's like to go back to the dark ages.
                  How much did Visual Studio 2005 cost? A school can't afford the latest and greatest new thing from MS just to stay on the upgrade treadmill. TC++ works, and that is why the admin is reluctant to upgrade it. We're using DOS because that's the thing that works with TC++.

                  I'm planning to convince the teacher to set up a live-CD based telnet server to try out the environment I suggested.


                  That has absolutely nothing to do with the ease of use of Windows and everything to do with the stupidity of your admin, and perhaps the people who use that as some kind of argument against Windows...
                  The admin is not stupid, just an overworked CS teacher (she has to manage the entire school network single-handedly, and also teach CS classes). She doesn't have the time to configure the thing properly. It would be easier to set it up with a shell script in Linux, only once at the beginning of the year.
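For what it's worth, such a start-of-year script might look something like this (a minimal sketch: `students.txt` and the directory layout are assumptions, and it only creates private directories so it can run without root; the commented `useradd` line is the real per-account step):

```shell
# One-time setup: give each student a private directory (mode 700) instead
# of one shared, world-writable account.
printf 'alice\nbob\n' > students.txt        # stand-in class list
base=$(mktemp -d)                           # stand-in for /home
while read -r student; do
    mkdir -p "$base/$student"
    chmod 700 "$base/$student"   # owner-only: no classmate can delete it
    # useradd -m -d "/home/$student" "$student"   # real step, needs root
done < students.txt
ls "$base"                       # the two private directories
```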


                  I agree, the Windows GUI is far more responsive than XWindows.
                  You've wilfully misunderstood me. In my experience, Windows on the same machine runs more slowly, less responsively.
                  I know we'll never agree, so I recommend we stay on topic and actually try to help the thread starter. If, however, you wish to continue this, I don't mind.

                  Comment


                  • #24
                    Originally posted by aneeshm
                    How much did Visual Studio 2005 cost? A school can't afford the latest and greatest new thing from MS just to stay on the upgrade treadmill. TC++ works, and that is why the admin is reluctant to upgrade it. We're using DOS because that's the thing that works with TC++.

                    I'm planning to convince the teacher to set up a live-CD based telnet server to try out the environment I suggested.
                    The cost, at least for my school, was extremely low to the school and free to the students.

                    It's part of the MSDN Academic Alliance: http://msdn.microsoft.com/academic/

                    Very, very, very rarely will you find a professional developer using toys like vi & gcc for real work. gcc is a terrible compiler and vi was obsoleted long ago.

                    The admin is not stupid, just an overworked CS teacher (she has to manage the entire school network single-handedly, and also teach CS classes). She doesn't have the time to configure the thing properly. It would be easier to set it up with a shell script in Linux, only once at the beginning of the year.
                    Nonsense, she could accomplish the same thing in a few clicks (or a script, or a macro) in Windows as well. The fact that she doesn't know that she can tells me she's not the brightest.

                    In fact, the use of Access Control Lists in Windows makes management that much easier.

                    You've wilfully misunderstood me. In my experience, Windows on the same machine runs more slowly, less responsively.
                    I very much disagree. So do benchmarks and plenty of anecdotal evidence.

                    Just think about it, XWindows was designed as a network user interface...it's an additional layer that doesn't exist on Windows, which allows Windows' UI to be more responsive.

                    Comment


                    • #25
                      Originally posted by Asher

                      CTRL-ALT-DEL, find and kill the process causing the problem, and you're done...
                      Misunderstood me again! I said that the whole thing had hung - the task manager included - isn't it a part of the GUI? The mouse refuses to work, typing on the keyboard does nothing, and CTRL-ALT-DEL has either no effect or the task manager itself is hung. What do you do?

                      Originally posted by Asher

                      You shot yourself in the foot with this argument.

                      Linux is a monolithic kernel, which by definition means it has far more things in the kernel. The Windows NT kernel is more of a microkernel/monolithic hybrid.
                      Linux is a monolithic kernel, but modules can be inserted and removed while it's running. How many drivers require you to restart Windows? Almost all of them. How many require you to restart the kernel in Linux? Very few, if any, and none that I have come across. Even changing the graphics card driver requires you to restart only one process - the X server.

                      Originally posted by Asher

                      Windows doesn't have the GUI (Windows Explorer) running in the kernel like you seem to think it does. Windows is far more modular than Linux. All of the above (kernel, shell (DOS), window system/manager [these SHOULD be integrated for lower latencies and responsiveness]) are separate on Windows as well.
                      Some GUI functions are integrated with the kernel, as you well know. And DOS is a terrible shell, compared to the GNU/Linux standard, BASH.

                      The problem with integrating the window system and manager is that the closer they grow together, the harder it is to separate them and create an alternative to any one. In Linux, you can have XFCE running on X running on ksh running on 2.4, or you can have KDE on X on bash on 2.6.12. You have choice. You can customise. And most importantly, a bug in one does not automatically become a bug in another (while moving down the chain).

                      Originally posted by Asher

                      If Linux were truly modular you wouldn't need to recompile the kernel to change so much...
                      When you change kernel-level options, you recompile the kernel. In Windows, you can't change many things hardcoded into the kernel space. How much can you customise the Windows kernel? Can you have it just the way you want it? Can you play around with it? Can you do risky things with it if you want to and know what you're doing, just to see what happens?

                      Originally posted by Asher

                      BTW, your example story goes a long way to show just how terrible the user-friendliness of Linux really is.
                      Hmm....

                      I'd like to see you rescue a Windows with almost all of the system directories blown away. The system on that UNIX box survived so long because the kernel was monolithic, and thus could keep running with most of the important system directories gone, and almost all userspace programs gone.

                      The fact is, you actually can rescue a Linux system in such dire straits, but you can't do the same with a Windows system.

                      And not only that, but the example is one of UNIX, not Linux.

                      Comment


                      • #26
                        Originally posted by aneeshm
                        Misunderstood me again! I said that the whole thing had hung - the task manager included - isn't it a part of the GUI? The mouse refuses to work, typing on the keyboard does nothing, and CTRL-ALT-DEL has either no effect or the task manager itself is hung. What do you do?
                        Actually, the Task Manager GUI is separate from the Windows Environment (Explorer.exe). If the entire interface freezes, i.e. Explorer.exe freezes, on Win2K/XP you can kill explorer.exe and re-launch it without issue.

                        The only time I've ever had the mouse freeze (which isn't part of the GUI, by the way) is during a system freeze/XWindows crash on Linux.

                        Linux is a monolithic kernel, but modules can be inserted and removed while it's running. How many drivers require you to restart Windows? Almost all of them. How many require you to restart the kernel in Linux? Very few, if any, and none that I have come across. Even changing the graphics card driver requires you to restart only one process - the X server.
                        You're confusing a lot of things here; requiring a reboot has nothing to do with being monolithic, or even with the kernel. In Windows (XP), when a device is in use, the kernel is locked. It can't be changed. To change it, that device needs to be disabled and re-enabled with the new driver, and in Windows XP this is done by rebooting. Windows 2003 requires far fewer reboots, and Longhorn will reduce that as well.

                        But that's completely unrelated to monolithic vs microkernel, or about storing things in kernel space...basically it has nothing to do with what we were discussing.

                        Some GUI functions are integrated with the kernel, as you well know. And DOS is a terrible shell, compared to the GNU/Linux standard, BASH.
                        But Monad is better than BASH, so I don't see your point.

                        You can also install Cygwin and use BASH on Windows.

                        The problem with integrating the window system and manager is that the closer they grow together, the harder it is to separate them and create an alternative to any one. In Linux, you can have XFCE running on X running on ksh running on 2.4, or you can have KDE on X on bash on 2.6.12. You have choice. You can customise. And most importantly, a bug in one does not automatically become a bug in another (while moving down the chain).
                        This is utter nonsense. The Linux design is absolutely horrible for a graphical interface, and it's one of the main failures of Linux on the desktop. It's overly complicated and is absolutely far buggier than MacOS X or Windows, which use a consolidated design.

                        The Linux GUI system is a hack, I cannot believe you're actually passing it off as a great design. You have multiple thread/context switches just to do simple things due to all of the layers of complexity, which you think is a good thing...

                        The window system and manager aren't integrated on Windows either. GDI+ is the "Window System", Explorer is the "Window Manager". There are Window Manager replacements out there, if you're crazy enough to use one.

                        When you change kernel-level options, you recompile the kernel. In Windows, you can't change many things hardcoded into the kernel space. How much can you customise the Windows kernel? Can you have it just the way you want it? Can you play around with it? Can you do risky things with it if you want to and know what you're doing, just to see what happens?
                        This is, again, a completely different topic. Linux, by design, requires far more recompiles to configure than the Windows NT kernel does.

                        Can you give me an example of a kernel recompile that would be beneficial to me? What would you like to recompile the Windows kernel to do, in other words?

                        I'd like to see you rescue a Windows with almost all of the system directories blown away. The system on that UNIX box survived so long because the kernel was monolithic, and thus could keep running with most of the important system directories gone, and almost all userspace programs gone.
                        Holy hell, my mind is reeling.

                        Windows doesn't even LET you delete critical system files that are in use. That's another one of those little things that Windows has, common sense...

                        And being monolithic had nothing to do with that. The utilities were loaded into RAM and so they didn't need the files, which were removed.

                        You're saying a bunch of nonsensical things and drawing conclusions related to even further nonsensical things and trying to say it constitutes an argument?

                        The fact is, you actually can rescue a Linux system in such dire straits, but you can't do the same with a Windows system.
                        That is absolutely not a fact, and is an incredibly silly statement to make.

                        You give an example of some idiot exploiting Linux/Unix's retarded "do whatever the user says" mentality to delete the ENTIRE filesystem. Then the fact that the files they needed to undo this were already loaded into RAM is somehow a Linux/Unix-exclusive concept...as if Windows doesn't load system binaries into RAM too!

                        Comment


                        • #27
                          It seems the problem here is, again, a lack of education among the people pushing Linux.

                          It's a familiar trend. The funniest part is I find Linux advocates the cockiest of them all.

                          On Windows, the keyboard/mouse/input drivers are not included in GDI+ (Window Server/Host/whatever you want to call it).

                          There are 3 separate entities: I/O, GDI+, Explorer.

                          I've never heard of GDI+ crashing, but Explorer has been known to crash.

                          In either case, since the I/O runs separately, you can use CTRL-ALT-DEL to launch a task manager, regardless of the state of GDI+ and Explorer. If GDI+ is not running (or is not responding), it falls back to its own basic graphical drawing code and will still allow you to kill or launch new processes.

                          You can even run "cmd.exe" and use a command-line shell.

                          It's FUD from the Linux camp that "if the GUI goes down, the whole system goes down". And in my experience, the Linux GUI goes down a helluva lot more than Windows' does...
                          Last edited by Asher; July 8, 2005, 14:46.

                          Comment


                          • #28
                            I have had cases where ctrl-alt-delete refused to come up.

                            Comment


                            • #29
                              Asher, Windows isn't necessarily more responsive.

                              On my Linux box, which is a P3-933 with 192M RAM, Windows XP is slow as ****, 2000 isn't that much faster, and 98 also takes forever to load.

                              On the other hand, Gentoo with kernel 2.6.11, xorg, and fluxbox is beautifully responsive.

                              ===

                              In any case, if you're buying server space, Felch, go with what's cheapest, because honestly, you'll be doing administration through web applications they give you and ftp/sftp connections.
                              B♭3

                              Comment


                              • #30
                                Originally posted by Kuciwalker
                                I have had cases where ctrl-alt-delete refused to come up.
                                Those are likely system freezes from bad drivers. Check the system event log when it occurs to see what the culprit was.

                                Comment
