Unix for Busy People - Introduction

From Public wiki of Kevin P. Inscoe

Latest revision as of 16:53, 25 March 2021
Who am I and what is my background?

My name is Kevin Inscoe. I have been a system administrator professionally since 1984, when I inherited the job as part of my duties working in a software engineering lab for a telecommunications company. I originally started with DEC's (Digital Equipment Corporation) OpenVMS operating system but also supported Apollo and HP-UX Unix workstations. In 1993 I started supporting Sun Microsystems servers (SunOS 4.1.3) full time, and in 1997 I added IBM AIX to my repertoire. Along the way I have also been a software developer, an IT manager and various other IT-related job functions, but I find myself always coming back to what is generally considered a system administrator. I am currently a system engineer for the Yoyodyne IT Unix and Systems and Enterprise Storage group. In my spare time I also write programs and contribute back to projects such as Gentoo Linux and Linux-HA, a high-availability clustering project for Linux. This is my first time teaching this course and my first time teaching a technical course.

What is this course about

Quite simply: how to make life easier for those who use Unix-like operating systems here at Yoyodyne, by choice or not.

What the course will not cover

  • Advocacy - this course remains agnostic about distributions and operating systems. Which ones are better or worse is not a topic of this discussion.
  • Programming - no programming will be discussed; however, shell scripting will be covered briefly.

  • Administration - this is not a course for system administrators or those who maintain Unix systems full time, although it could be a first step.
  • Networks and networking - It is assumed you already use some other operating system and know the basics of navigating the interwebs and the corporate networks at Yoyodyne. In fact, most of you are probably already logging in to Unix systems as part of your jobs. Remember, you're busy, right? However, I will touch on some network tools in the troubleshooting class and can of course answer questions at any time about networking in general.
  • I am assuming you're not nerdcore or a geek, although if you are you can stay. But I will have to upgrade my jokes.
  • Discussion about 'fixing' the way we do things at Yoyodyne IT. We can tackle that topic in other established ways; it is not on task for this course.

Who should be here

My assumed audience at Yoyodyne: operations, programmers, software developers, build staff and possibly some end-users.

It probably won't benefit you if you're a system administrator, as you know this stuff already and will likely heckle me from the sidelines, so leave now. Smile.

What should I expect to learn

  • An understanding about the philosophy of Unix derived operating systems.
  • A basic understanding of commands related to everyday use of the Unix operating systems.
  • A good understanding of UNIX(tm) file systems and directory structure.
  • Information on obtaining further help with more advanced and also programming topics.
  • Information on free and open source software and advocacy.

What is Unix and why do I care?

At Yoyodyne Unix based operating systems comprise much of our server platform (two-hundred plus physical servers). Vital company applications such as Oracle and MySQL databases, SAP, Tibco and hundreds of eProduct sites including K-6, ThinkDifferently, HRW and Classapalooza all run on Unix based operating systems.

Unix is everywhere.

It is embedded in devices like the PlayStation 3, it runs traffic control devices throughout the world, it runs the stock exchanges of the world, it's in the White House and it's on the Space Shuttle.

"One of the questions that comes up all the time is: How
enthusiastic is our support for UNIX?

	Unix was written on our machines and for our machines many
years ago.  Today, much of UNIX being done is done on our machines.
Ten percent of our VAXs are going for UNIX use.  UNIX is a simple
language, easy to understand, easy to get started with.  It's great for
students, great for somewhat casual users, and it's great for
interchanging programs between different machines.  And so, because of
its popularity in these markets, we support it.  We have good UNIX on
VAX and good UNIX on PDP-11s.

	It is our belief, however, that serious professional users will
run out of things they can do with UNIX. They'll want a real system and
will end up doing VMS when they get to be serious about programming.
	With UNIX, if you're looking for something, you can easily and
quickly check that small manual and find out that it's not there.  With
VMS, no matter what you look for - it's literally a five-foot shelf of
documentation - if you look long enough it's there.  That's the
difference - the beauty of UNIX is it's simple; and the beauty of VMS
is that it's all there."

		- Ken Olsen, President of DEC, 1984

A very brief history of Unix

In the beginning there were two Unices (is that a word?):

AT&T and Berkeley (BSD).

http://en.wikipedia.org/wiki/UNIX

http://en.wikipedia.org/wiki/Berkeley_Software_Distribution

While AT&T Bell Labs in New Jersey was working on its Unix, Bill Joy, a graduate student at UC Berkeley, liked what was going on and brought back a tape of the source code of the Sixth Edition (Version 6) Unix being built at AT&T. His distribution became known as BSD version 1. He added a Pascal compiler and his own creation, the ex editor.

In 1978 UCB received a VAX, and BSD had to be modified to work with virtual memory and a 32-bit architecture. This became BSD version 3, and the first ever "fork" took place, forever splitting AT&T and BSD apart. The differences between the two are mainly in memory and network management.

Development of BSD caused UCB to win contracts from DARPA and the Department of Defense, which pole-vaulted BSD into the limelight. Early vendors (particularly defense contractors) such as Sun Microsystems, Apollo and Digital Equipment adopted BSD, while more business-oriented vendors like IBM, Santa Cruz Operation and NCR (and later Sun) chose AT&T's System V derivation, later managed by Unix System Laboratories. This caused some consternation for developers and software vendors, and in 1986 the IEEE started a project in an attempt to unify the two branches of Unix, called the Portable Operating System Interface [for Unix], or POSIX.

At around the same time a consortium known as The Open Group had similar goals, but for all operating systems, including Digital Equipment's VMS; this started the craze of renaming operating systems with the word "open", and VMS became OpenVMS. Almost all vendors in existence signed on to become members of The Open Group, which quickly adopted the POSIX group's recommendations. In 1984 IBM released the IBM PC AT, and the Intel x86 architecture began to take off as a dominant architecture. The later Intel 80386 chip finally supported memory page protection, long considered vital to a true multi-user system. This led to a number of variants of BSD, such as 386BSD, optimized for the 386 chip-set. It was at this time, however, that legal troubles began for BSD, since portions of BSD still contained AT&T-licensed code. The commercial offshoot BSDi was blocked by a judge from distributing any more copies.

It was during this time that a Finnish computer science student, Linus Torvalds, wanted to experiment with a small operating system that could run on IBM 386 and 486 PCs, but because of the legal trouble BSDi was having he could not use that code base. Instead he turned to Minix, a Unix-like operating system that fellow computer scientist Andrew S. Tanenbaum had written from scratch. Linus developed his own kernel under Minix, and this became what is now the Linux kernel. In 1991 he posted the code and a community formed around this new kernel.

To prevent software from being used on their competitors' computers, most manufacturers stopped distributing source code and began using copyright and restrictive software licenses to limit or prohibit copying and redistribution. Around 1980 an MIT programmer named Richard Stallman, dissatisfied with this turn of events, was one day denied access to the source code for a printer; this led him to quit MIT in 1984 to create the GNU Project and to form the Free Software Foundation. The founding goal of the project was, in the words of its initial announcement, to develop "a sufficient body of free software [...] to get along without any software that is not free." To make this happen, the GNU Project began working on an operating system called GNU. GNU is a recursive acronym that stands for "GNU's Not Unix". To this end they began to rewrite almost all of the standard Unix commands under the new FSF license, referred to as the GNU General Public License, or GPL.

The GNU Project also wrote its own operating system kernel, known as Hurd; however, when the first version of Linux was released in 1994 it was wrapped in the GNU Unix commands, making it the first ever completely free, open and unencumbered Unix-like operating system, and a community rapidly proliferated around it. Little known fact: Unix people like beards. I am no different.

http://en.wikipedia.org/wiki/Dennis_Ritchie

http://www.urbandictionary.com/define.php?term=Unix%20beard

Several friends of mine: http://www.s5h.net/unix/unix-beards/

The two unix operating systems in use at Yoyodyne

Solaris releases 9 and 10 (a few release 8 legacy systems remain) and Linux.

Differences between Solaris and GNU/Linux

The biggest difference is that Linux primarily uses the GNU command programs, while Sun Solaris uses commands originally licensed from AT&T, some of which have since been rewritten; some POSIX-style commands have also been added.

Solaris is homogeneous and single-sourced (single effort, single vendor) while Linux is somewhat ad hoc and multi-sourced (multiple distributions and vendors). At Yoyodyne we deploy on Red Hat Enterprise Linux.

The Unix philosophy

"Unix Airlines: You walk out to the runway and they give you a box of tools and some airplane parts. The passengers form into groups and start building twelve different planes." - Anonymous

http://www.faqs.org/docs/artu/ch01s06.html

Unix is very batch oriented. It is designed (for the most part) to run without additional user input.

Think mainframes of old.

  • Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
  • Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
  • Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
  • Write programs to work together. Write programs to handle text streams, because that is a universal interface.
  • Rule of Clarity: Clarity is better than cleverness.
  • Rule of Simplicity: Design for simplicity; add complexity only where you must.

KISS - Keep It Simple Silly (or Stupid)

  • Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
  • Rule of Transparency: Design for visibility to make inspection and debugging easier.
  • Rule of Least Surprise: In interface design, always do the least surprising thing.
  • Rule of Silence: When a program has nothing surprising to say, it should say nothing.
  • Rule of Repair: When you must fail, fail noisily and as soon as possible.
  • Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
  • Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.

(awk, grep and sed come to mind)

and my personal favorite:

  • Rule of Diversity: Distrust all claims for "one true way".

There are many ways to skin a cat.

  • Commands can be stacked and output directed to files and devices. Remember MS-DOS?
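The pipeline idea above can be sketched in a few lines. This is a hypothetical example (the file names under /tmp are made up): each small tool does one job, a pipe connects them, and `>` redirects the final output to a file instead of the terminal.

```shell
# Hypothetical sample data: four lines, one word each.
printf 'apple\nbanana\napple\ncherry\n' > /tmp/fruit.txt

# sort groups duplicates together, uniq -c counts them,
# and sort -rn ranks the counts highest first.
sort /tmp/fruit.txt | uniq -c | sort -rn

# The same pipeline, redirected to a file instead of the screen.
sort /tmp/fruit.txt | uniq -c | sort -rn > /tmp/report.txt
```

No single program here knows about the others; the shell wires them together, which is exactly the "expect your output to become someone else's input" rule in action.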

I use Microsoft Windows or Apple Mac now what's different?

If you're using a Mac (OS X or 10) you are already using a Unix called Darwin, whose Mach-based kernel is jump-started from BSD. Parts of FreeBSD and NetBSD exist in it as well. It was a reboot of NeXTSTEP, the environment created by Steve Jobs when he originally left Apple.

Although Unix has several graphical interfaces available, we will be focusing mostly on the command line. However, if there is interest I can cover the X Window System in the last class. Being command-line means, obviously, no point and click. There are web tools in some cases, though they are not widely deployed at Yoyodyne. Many users view Unix as not user-friendly because of its lack of a default, standard GUI. However, there was once a standard. Remember the Open Group? They created the Common Desktop Environment, referred to as CDE, long before Windows 95. It was dreadfully awful. Windows won the day. Today no one uses it, although I did give it the old "college try" for several years. Kevin will usually rant here about how the desktop is dead. Just smile and nod and it will all be over soon.

"Unix is user-friendly, it's just picky about who it's friends are." - Anonymous

You will be working with file systems, which are analogous to drive letters in Windows and work similarly. But instead of drive letters (C:) you have mount points (/home, for example).
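You can see mount points for yourself from any shell; a quick sketch on a typical Linux or Solaris box:

```shell
# df lists each file system and the directory (mount point) it is attached to,
# much like Windows lists drive letters.
df -h

# Everything hangs off the single root directory "/" - there is no C: or D:.
ls /
```

Whatever is plugged in or shared over the network simply appears somewhere under `/`; the path tells you where, not a drive letter.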

Another main difference between UNIX and Windows is the process hierarchy which UNIX possesses. When a new process is created by a UNIX application, it becomes a child of the process that created it. This hierarchy is very important, so there are system calls for influencing child processes. Windows processes on the other hand do not share a hierarchical relationship. Receiving the process handle and ID of the process it created, the creating process of a Windows system can maintain or simulate a hierarchical relationship if it is needed. The Windows operating system ordinarily treats all processes as belonging to the same generation.
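The parent/child relationship is easy to observe from the shell. A minimal sketch using standard POSIX shell variables (`$$` is the current shell's PID, `$!` is the PID of the last background job):

```shell
# Start a child process in the background; it inherits this shell as its parent.
sleep 30 &
child=$!

# ps shows the child's PID and its parent PID (PPID);
# the PPID will match this shell's own PID ($$).
ps -o pid=,ppid= -p "$child"
echo "shell PID: $$"

# Clean up the child we started.
kill "$child"
```

Because the shell is the parent, it can wait on, signal, or reap that child; this is the hierarchy Windows processes do not have by default.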

Unix uses processes known as daemons to provide services; Windows has service processes. Daemons are processes, started when Unix boots, that provide services to other applications. Daemons typically do not interact with users and run under a service account. A Windows service is the equivalent of a Unix daemon. When a Windows system is booted, a service may be started. This is a long-running application that does not interact with users, so it has no user interface. Services continue running across logon sessions and are controlled by the Windows Service Control Manager.
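One visible trait of daemons: since they do not interact with users, they have no controlling terminal, which ps reports as "?" in the TTY column (exact output varies by system):

```shell
# List process names that have no controlling terminal - typically daemons.
ps -e -o tty=,comm= | awk '$1 == "?" { print $2 }' | head
```

On a typical server most of the process list looks like this: cron, sshd, syslogd and friends, all running detached in the background.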

A Unix program often has one small, specific task and works well together with other programs.

Unix is mostly text-centric; the CLI, or command line interface, is important. Of course there are powerful GUIs (graphical user interfaces) available.

Unix and Windows use completely different paradigms for run-time loading of code.

In Unix, a shared object (.so) file contains code to be used by the program, and also the names of functions and data that it expects to find in the program. When the file is joined to the program, all references to those functions and data in the file's code are changed to point to the actual locations in the program where the functions and data are placed in memory. This is basically a link operation.

In Windows, a dynamic-link library (.dll) file has no dangling references. Instead, an access to functions or data goes through a look-up table. So the DLL code does not have to be fixed up at run-time to refer to the program's memory; instead, the code already uses the DLL's lookup table, and the look-up table is modified at run-time to point to the functions and data.

In Unix, there is only one type of library file (.a) which contains code from several object files (.o). During the link step to create a shared object file (.so), the linker may find that it doesn't know where an identifier is defined. The linker will look for it in the object files in the libraries; if it finds it, it will include all the code from that object file.

In Windows, there are two types of library, a static library and an import library (both called .lib). A static library is like a Unix .a file; it contains code to be included as necessary. An import library is basically used only to reassure the linker that a certain identifier is legal, and will be present in the program when the DLL is loaded. So the linker uses the information from the import library to build the look-up table for using identifiers that are not included in the DLL. When an application or a DLL is linked, an import library may be generated, which will need to be used for all future DLLs that depend on the symbols in the application or DLL.

Suppose you are building two dynamic-load modules, B and C, which should share another block of code A. On Unix, you would not pass A.a to the linker for B.so and C.so; that would cause it to be included twice, so that B and C would each have their own copy. In Windows, building A.dll will also build A.lib. You do pass A.lib to the linker for B and C. A.lib does not contain code; it just contains information which will be used at run-time to access A's code.

In Windows, using an import library is sort of like using "import A"; it gives you access to A's names, but does not create a separate copy. On Unix, linking with a library is more like "from A import *"; it does create a separate copy.

Linux distributions typically provide two GUIs, KDE and GNOME; however, there are many more available that can be installed or built from source.

Yes there will be labs!

You should have an account on the two lab systems; if you did not receive email about it, notify Kevin Inscoe.

Questions