-------------

Thursday, March 29, 2012

What is RAM and the difference between RAM, Virtual Memory & SWAP

------------------------------------------------------------------------------------------

RAM

       RAM (Random Access Memory) is the place in a computer where the operating system, application programs, and data in current use are kept so that they can be quickly reached by the computer's processor. RAM is much faster to read from and write to than the other kinds of storage in a computer, the hard disk, floppy disk, and CD-ROM. However, the data in RAM stays there only as long as your computer is running. When you turn the computer off, RAM loses its data. When you turn your computer on again, your operating system and other files are once again loaded into RAM, usually from your hard disk.

Types of RAM
       The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In static RAM, a bit of data is stored using the state of a flip-flop. This form of RAM is more expensive to produce, but is generally faster and requires less power than DRAM and, in modern computers, is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, Read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM (such as EEPROM and flash memory) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, etc. As of 2007, NAND flash has begun to replace older forms of persistent storage, such as magnetic disks and tapes, while NOR flash is being used in place of ROM in netbooks and rugged computers, since it is capable of true random access, allowing direct code execution.
ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction code.
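
The exact circuitry differs between implementations, but the parity idea behind simple error detection can be shown in a few lines of C. This is a hedged illustration of the principle only (the byte value and the single-bit-flip scenario are invented for the example), not any particular memory controller's logic:

    #include <stdio.h>

    /* even-parity bit: 1 if the byte contains an odd number of 1-bits */
    static unsigned parity(unsigned char b) {
        unsigned p = 0;
        while (b) { p ^= b & 1u; b >>= 1; }
        return p;
    }

    int main(void) {
        unsigned char stored = 0x5A;        /* data byte written to memory    */
        unsigned check = parity(stored);    /* parity bit stored alongside it */
        stored ^= 0x08;                     /* simulate a single flipped bit  */
        if (parity(stored) != check)
            printf("memory error detected\n");  /* detection only; real ECC adds
                                                   extra bits to also correct it */
        return 0;
    }

A single parity bit can only detect an odd number of flipped bits; ECC modules use longer error-correcting codes so that single-bit errors can also be corrected.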
In general, the term RAM refers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. In optical storage, the term DVD-RAM is somewhat of a misnomer since, like CD-RW, a rewriteable DVD must be erased before it can be rewritten.


How computer RAM works
     Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information -- a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.
      RAM stands for Random Access Memory. This means information can be retrieved and stored by the computer in any order. RAM gives your computer a temporary place to process electronic data. RAM chips continue to store information only as long as the computer has electrical power. In other words, when you shut off your computer, all the data stored in RAM is lost.
All actual computing starts with the CPU (Central Processing Unit).




Virtual Memory

     Virtual memory is a common part of most operating systems on desktop computers. It has become so common because it provides a big benefit for users at a very low cost.
     This section looks at what virtual memory is, what your computer uses it for, and how it behaves in practice. Most computers today have something like 32 or 64 megabytes of RAM available for the CPU to use (see the RAM section above for details). Unfortunately, that amount of RAM is not enough to run all of the programs that most users expect to run at once.
     For example, if you load the operating system, an e-mail program, a Web browser and a word processor into RAM simultaneously, 32 megabytes is not enough to hold it all. If there were no such thing as virtual memory, then once you filled up the available RAM your computer would have to say, "Sorry, you cannot load any more applications. Please close another application to load a new one." With virtual memory, what the computer can do is look at RAM for areas that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application.
      Because this copying happens automatically, you don't even know it is happening, and it makes your computer feel like it has unlimited RAM space even though it only has 32 megabytes installed. Because hard disk space is so much cheaper than RAM chips, it also has a nice economic benefit.
The read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously -- then, the only time you "feel" the slowness of virtual memory is when there's a slight pause as you change tasks. When that's the case, virtual memory is perfect.
     When it is not the case, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.

     The area of the hard disk that stores the RAM image is called a page file. It holds pages of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM. On older Windows machines, page files had a .SWP extension; on Windows NT and its successors the page file is named pagefile.sys.
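
The same mechanism can be imitated at a small scale with a memory-mapped file. The following is a hedged POSIX sketch (the file name backing.bin is invented for the example): mmap() gives a process a region of memory whose pages live in a file on disk, with the operating system paging data in and out on demand, much as it does with a page file.

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void) {
        int fd = open("backing.bin", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, 4096) != 0) { perror("setup"); return 1; }
        /* the 4 KB region lives in the file; the kernel pages it in and out */
        char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }
        strcpy(mem, "this byte range is disk-backed");
        msync(mem, 4096, MS_SYNC);   /* force the dirty page out to the disk */
        munmap(mem, 4096);
        close(fd);
        return 0;
    }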




SWAP

      Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to a preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space give the amount of virtual memory available.
     Swapping is necessary for two important reasons. First, when the system requires more memory than is physically available, the kernel swaps out less used pages and gives memory to the current application (process) that needs the memory immediately. Second, a significant number of the pages used by an application during its startup phase may only be used for initialization and then never used again. The system can swap out those pages and free the memory for other applications or even for the disk cache.
  
   However, swapping does have a downside. Compared to memory, disks are very slow. Memory speeds are measured in nanoseconds, while disk speeds are measured in milliseconds, so accessing the disk can be tens of thousands of times slower than accessing physical memory. The more swapping that occurs, the slower your system will be. Sometimes excessive swapping, or thrashing, occurs: a page is swapped out, then very soon swapped back in, then swapped out again, and so on. In such situations the system is struggling to find free memory and keep applications running at the same time. In this case only adding more RAM will help.

     Linux has two forms of swap space: the swap partition and the swap file. The swap partition is an independent section of the hard disk used solely for swapping; no other files can reside there. The swap file is a special file in the filesystem that resides amongst your system and data files.
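
The sizes that make up virtual memory can be queried from a running program. A minimal sketch, assuming a Linux system where glibc's sysinfo() is available:

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void) {
        struct sysinfo si;
        if (sysinfo(&si) != 0) { perror("sysinfo"); return 1; }
        unsigned long long unit = si.mem_unit;   /* bytes per reported unit */
        printf("physical RAM: %llu MB\n", si.totalram  * unit / (1024 * 1024));
        printf("total swap:   %llu MB\n", si.totalswap * unit / (1024 * 1024));
        printf("free swap:    %llu MB\n", si.freeswap  * unit / (1024 * 1024));
        return 0;
    }

The physical RAM total plus the swap total is the virtual memory figure described above.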

Friday, March 23, 2012

Master Boot Record ( MBR )

------------------------------------------------------------------------------------------


   

 The Master Boot Record (MBR) is the information in the first sector of any hard disk or diskette that identifies how and where an operating system is located so that it can be booted (loaded) into the computer's main storage or random access memory. The Master Boot Record is also sometimes called the "partition sector" or the "master partition table" because it includes a table that locates each partition that the hard disk has been formatted into. In addition to this table, the MBR also includes a program that reads the boot sector record of the partition containing the operating system to be booted into RAM. In turn, that record contains a program that loads the rest of the operating system into RAM.


When you turn on your PC, the processor has to begin processing. However, your system memory is empty, and the processor doesn't have anything to execute, or really even know where it is. To ensure that the PC can always boot regardless of which BIOS is in the machine, chip makers and BIOS manufacturers arrange for the processor, once turned on, to always start executing at the same place, FFFF0h.

In a similar manner, every hard disk must have a consistent "starting point" where key information is stored about the disk, such as how many partitions it has, what sort of partitions they are, etc. There also needs to be somewhere that the BIOS can load the initial boot program that starts the process of loading the operating system. The place where this information is stored is called the master boot record (MBR). It is also sometimes called the master boot sector or even just the boot sector. (Though the master boot sector should not be confused with volume boot sectors, which are different.)


The master boot record is always located at cylinder 0, head 0, sector 1, the first sector on the disk. This is the consistent "starting point" that the disk always uses. When the BIOS boots the machine, it will look here for instructions and information on how to boot the disk and load the operating system. The master boot record contains the following structures :
  1. Master Partition Table : This small table contains the descriptions of the partitions that are contained on the hard disk. There is only room in the master partition table for the information describing four partitions. Therefore, a hard disk can have only four true partitions, also called primary partitions. Any additional partitions are logical partitions that are linked to one of the primary partitions. One of the partitions is marked as active, indicating that it is the one that the computer should use for booting up.
  2. Master Boot Code : The master boot record contains the small initial boot program that the BIOS loads and executes to start the boot process. This program eventually transfers control to the boot program stored on whichever partition is used for booting the PC. A minimal parsing sketch follows this list.
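
Because this layout is fixed (the partition table starts at byte offset 446, each of the four entries is 16 bytes, and the sector ends with the 55h AAh boot signature), the MBR can be parsed with ordinary file reads. A hedged C sketch, assuming a raw disk image named disk.img (reading a real device the same way would require administrator rights):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t mbr[512];
        FILE *f = fopen("disk.img", "rb");
        if (!f || fread(mbr, 1, 512, f) != 512) { perror("read MBR"); return 1; }
        if (mbr[510] != 0x55 || mbr[511] != 0xAA) {        /* boot signature */
            fprintf(stderr, "no valid MBR signature\n"); return 1;
        }
        for (int i = 0; i < 4; i++) {                /* 4 entries, 16 bytes each */
            const uint8_t *e = mbr + 446 + 16 * i;   /* table starts at offset 446 */
            uint32_t start = e[8] | e[9] << 8 | e[10] << 16 | (uint32_t)e[11] << 24;
            printf("partition %d: %s type=0x%02X start LBA=%u\n",
                   i + 1, (e[0] & 0x80) ? "active" : "inactive", e[4], start);
        }
        fclose(f);
        return 0;
    }

The active flag in byte 0 of an entry is what the master boot code checks to decide which partition's boot sector to load next.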

Due to the great importance of the information stored in the master boot record, if it ever becomes damaged or corrupted in some way, serious data loss can be--in fact, often will be--the result. Since the master boot code is the first program executed when you turn on your PC, this is a favorite place for virus writers to target.

Difference of several file systems

------------------------------------------------------------------------------------------
1. FAT 16 (File Allocation Table 16)


Before FAT16 there was an earlier MS-DOS file system, FAT12, but its many shortcomings led to FAT16. FAT16 itself was introduced with MS-DOS 3.0 in 1984. Initially the system was designed to manage files on floppy disks, and it has been amended several times so that it can manage files on hard drives. FAT16's advantage is its compatibility with almost every operating system, whether Windows 95/98/Me, OS/2, Linux or even Unix. But behind it all, the biggest problem of FAT16 is that a partition can hold only a fixed number of clusters, so the bigger the hard drive, the greater the cluster size. In addition, another shortcoming is that FAT16 does not support compression, encryption or access control on a partition.
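
To see why bigger drives force bigger clusters, note that FAT16 can address at most 65,536 clusters per partition. A small C sketch of the arithmetic (assuming cluster sizes that double up from one 512-byte sector, which matches how FAT16 is normally formatted):

    #include <stdio.h>

    int main(void) {
        unsigned long long part_mb = 2048;                 /* a 2 GB partition */
        unsigned long long bytes = part_mb * 1024 * 1024;
        unsigned long cluster = 512;                       /* start at one sector */
        /* grow the cluster until the partition fits in 65,536 clusters */
        while (bytes / cluster > 65536UL) cluster *= 2;
        printf("%llu MB partition -> %lu KB clusters\n", part_mb, cluster / 1024);
        return 0;
    }

For a 2 GB partition this prints 32 KB clusters, which is why FAT16 wastes so much space on small files as drives grow.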

2. FAT 32 (File Allocation Table 32)

FAT32 was first introduced with Windows 95 OSR2 and is a further development of FAT16. FAT32 offers the ability to accommodate a larger number of clusters in a partition. It also handles large drives better than FAT16. However, FAT32 has a weakness that FAT16 does not: only a limited set of operating systems can recognize FAT32, whereas FAT16 is recognized by virtually every operating system. This does not matter if you are running Windows XP, because Windows XP can work with either file system on a partition.

3. NTFS (New Technology File System)


NTFS was first introduced in Windows NT and is a file system that is completely different from FAT technology. NTFS offers better security, file compression, smaller cluster sizes, and even support for data encryption. NTFS is the default file system for Windows XP, and if you do a regular upgrade of Windows you will be asked whether you want to upgrade to NTFS or keep using FAT. If you upgrade to Windows XP and do not make the change to NTFS, it does not matter, because you can convert the partition to NTFS at any time. But remember that once you are using NTFS, a problem will arise if you want to downgrade to FAT without losing data.

NTFS is generally not compatible with other operating systems installed on the same computer (dual OS), and it is not even detected when you do a startup boot using a floppy. It is therefore recommended that you provide a small partition that uses the FAT file system at the beginning of the disk. This partition allows you to save recovery tools in case you get into trouble.

 

4. Ext2 - Second Extended File System


Ext2 was first developed and integrated in the Linux kernel, and is now also being developed for use on other operating systems. The goal was to create a powerful file system that implements standard UNIX file semantics and offers advanced features.


Abilities:

  • The Ext2 file system supports the standard UNIX file types, such as regular files, directories, device special files and symbolic links.
  • Ext2 can manage file systems created on very large partitions.
  • The Ext2 file system supports long file names, up to a maximum of 255 characters.
  • Ext2 reserves several blocks for the super user (root). A small query sketch follows this list.
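
Some of these limits can be observed from a running program. A minimal POSIX sketch, assuming a Linux machine where / is an ext2/ext3-style file system (the numbers printed depend on the actual mount):

    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(void) {
        struct statvfs vfs;
        if (statvfs("/", &vfs) != 0) { perror("statvfs"); return 1; }
        printf("max filename length: %lu\n", vfs.f_namemax);  /* 255 on ext2 */
        printf("block size: %lu\n", vfs.f_bsize);
        /* f_bfree counts all free blocks; f_bavail excludes the blocks
         * reserved for root, so the difference is the root reservation */
        printf("blocks reserved for root: %llu\n",
               (unsigned long long)(vfs.f_bfree - vfs.f_bavail));
        return 0;
    }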


5. Ext3 - Third Extended File System


Ext3 is a journalled file system, designed to help protect the data on it. With a journalled file system, we no longer need to check the consistency of the data after a crash, which would take a very long time on a large disk.

Ext3 is a file system that was developed for use on the Linux operating system. Ext3 is the result of improving Ext2 into a better shape by adding a variety of advantages.


Advantages:

  • Ext3 does not require a full file-system check, even when the system experiences an unclean shutdown, except in certain very rare cases of hardware failure.
  • This is because the data is written to disk in a way that keeps the file system consistent at all times.
  • The time required to recover an Ext3 file system after an unclean shutdown does not depend on the size of the file system or the number of files; it depends on the size of the "journal" used to maintain consistency. With the initial (default) journal size, recovery takes about one second (depending on the speed of the hardware).

Ext2 and Ext3 comparison:

  • In general, Ext2 and Ext3 follow the same principles.
  • File-access methods, data security and disk-space usage are almost the same in the two file systems.
  • The fundamental difference between the two file systems is the concept of journaling used in Ext3.
  • This journaling is what separates Ext2 and Ext3 in terms of durability and recovery of data from damage: it makes Ext3 much faster than Ext2 at recovering from damage. A minimal sketch of the journaling idea follows this list.
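
To make the journaling idea concrete, here is a deliberately simplified write-ahead-log sketch in C. It is not the real ext3 journal format (the file names and record strings are invented for the example); it only shows the ordering that makes fast recovery possible: log the intent, apply the change, then mark it committed.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>

    static void journal_write(int jfd, const char *rec) {
        write(jfd, rec, strlen(rec));
        fsync(jfd);                     /* force the record onto the disk */
    }

    int main(void) {
        int jfd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        int dfd = open("data.txt",    O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (jfd < 0 || dfd < 0) { perror("open"); return 1; }

        journal_write(jfd, "BEGIN update balance=100\n"); /* 1. log the intent */
        write(dfd, "balance=100\n", 12);                  /* 2. apply it       */
        fsync(dfd);
        journal_write(jfd, "COMMIT\n");                   /* 3. mark it done   */
        /* after a crash, a BEGIN without a matching COMMIT tells recovery the
         * update was interrupted and must be redone or discarded, never
         * half-trusted; replay time depends on journal size, not disk size */
        close(jfd); close(dfd);
        return 0;
    }

Ext3 does the equivalent at the block level inside the kernel, which is why a crash leaves at worst an uncommitted journal entry rather than a half-written file system.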



File System

------------------------------------------------------------------------------------------



In a computer, a file system (sometimes written filesystem) is the way in which files are named and where they are placed logically for storage and retrieval. The DOS, Windows, OS/2, Macintosh, and UNIX-based operating systems all have file systems in which files are placed somewhere in a hierarchical (tree) structure. A file is placed in a directory (folder in Windows) or subdirectory at the desired place in the tree structure.
  • File systems specify conventions for naming files. These conventions include the maximum number of characters in a name, which characters can be used, and, in some systems, how long the file name suffix can be. A file system also includes a format for specifying the path to a file through the structure of directories.

  • Sometimes the term refers to the part of an operating system or an added-on program that supports a file system as defined above. Examples of such add-on file systems include the Network File System (NFS) and the Andrew file system (AFS).

  • In the specialized lingo of storage professionals, a file system is the hardware used for nonvolatile storage, the software application that controls the hardware, and the architecture of both the hardware and software.


Monday, March 5, 2012

The Key Ingredients of the OS

------------------------------------------------------------------------------------------

         In computing, the kernel is the main component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually, as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.
  
        Operating system software is the first layer placed into computer memory when the computer starts up (computer memory in this case being the hard drive, not RAM). Other software runs only after the operating system is running, and the operating system performs common core services for that software.

         Common core services include access to the disk, memory management, task scheduling, and the user interface. With these in place, each piece of software no longer has to perform these common core tasks itself, as they can be serviced and performed by the operating system. The section of code that performs these core and general tasks is called the operating system kernel.

Kernel (core of the operating system)

         In a computing device, the kernel is the core component of the operating system running on the device. The kernel is responsible for managing the sharing of system resources: the communication between hardware and software components.
           The kernel links software applications to the computer hardware. It provides the lowest-level abstraction layer for resources such as memory, the processor and I/O devices, which a software application must control in order to function. The kernel makes facilities of this kind available to application processes through the mechanism of IPC (Inter-Process Communication) and system calls.
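
The system-call boundary can be seen directly from a user program. A minimal sketch, assuming a Linux system with glibc; both calls reach the same kernel service, one through the libc wrapper and one through the raw system-call interface:

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello via the kernel\n";
        /* libc wrapper around the write system call */
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        /* the same request issued as a raw system call */
        syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }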



Kernel basic facilities

     The kernel's primary function is to manage the computer's resources and allow other programs to run and use these resources. Typically, the resources consist of:

  • The Central Processing Unit. This is the most central part of a computer system, responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).
  • The computer's memory. Memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough is available.
  • Any Input/Output devices present in the computer, such as keyboard, mouse, disk drives, printers, displays, etc. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).

 Issues of kernel support for protection

          An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviors (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.
            An important kernel design decision is the choice of the abstraction levels where the security mechanisms and policies should be implemented. Kernel security mechanisms play a critical role in supporting security at higher levels. One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security.
        The lack of many critical security mechanisms in current mainstream operating systems impedes the implementation of adequate security policies at the application abstraction level. In fact, a common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support. 

Hardware-based protection or language-based protection

       Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule (e.g., a user process that is about to read or write to kernel memory, and so on). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces. Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods.
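
This hardware enforcement is easy to provoke from an ordinary process. A hedged sketch, assuming a Linux/x86-64 user process (the exact faulting address and delivered signal can differ on other platforms):

    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>

    static void on_segv(int sig) {
        (void)sig;
        /* async-signal-safe reporting, then exit instead of retrying */
        write(STDOUT_FILENO, "caught SIGSEGV\n", 15);
        _exit(0);
    }

    int main(void) {
        signal(SIGSEGV, on_segv);
        /* an address in the kernel half of the x86-64 address space */
        volatile char *kernel_addr = (volatile char *)0xffff800000000000UL;
        *kernel_addr = 1;   /* user-mode write into kernel space: the MMU faults */
        printf("never reached\n");
        return 0;
    }

The processor, not any library code, blocks the access; the kernel merely decides what to do with the resulting fault.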
        An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement.
Advantages of this approach include:
  • No need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space.
  • Flexibility. Any protection scheme that can be designed to be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g. from a hierarchical system to a capability-based one) do not require new hardware.
Disadvantages include:
  • Longer application start up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.
  • Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.
Examples of systems with language-based protection include JX and Microsoft's Singularity.


Monolithic kernels

       In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers, such as UNIX developer Ken Thompson, maintain that it is "easier to implement a monolithic kernel" than microkernels. The main disadvantages of monolithic kernels are the dependencies between system components — a bug in a device driver might crash the entire system — and the fact that large kernels can become very difficult to maintain.


 Microkernels

        Microkernel (also abbreviated μK or uK) is the term describing an approach to Operating System design by which the functionality of the system is moved out of the traditional "kernel", into a set of "servers" that communicate through a "minimal" kernel, leaving as little as possible in "system space" and as much as possible in "user space". A microkernel that is designed for a specific platform or device is only ever going to have what it needs to operate. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication.
The very essence of the microkernel architecture illustrates some of its advantages:
  • Maintenance is generally easier.
  • Patches can be tested in a separate instance, and then swapped in to take over a production instance.
  • Rapid development time: new software can be tested without having to reboot the kernel.
  • More persistence in general: if one instance goes haywire, it is often possible to substitute it with an operational mirror.


Hybrid (or) Modular kernels

        Hybrid kernels are used in most commercial operating systems such as Microsoft Windows NT, 2000, XP, Vista, and 7. Apple Inc's own Mac OS X uses a hybrid kernel called XNU which is based upon code from Carnegie Mellon's Mach kernel and FreeBSD's monolithic kernel. They are similar to microkernels, except they include some additional code in kernel-space to increase performance. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance.

 

Unix

        In the Unix model, the Operating System consists of two parts: first, the huge collection of utility programs that drive most operations, and second, the kernel that runs the programs. Under Unix, from a programming standpoint, the distinction between the two is fairly thin; the kernel is a program, running in supervisor mode, that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and provides locking and I/O services for these programs; beyond that, the kernel doesn't intervene at all in user space.

        Modern Unix-derivatives are generally based on module-loading monolithic kernels. Examples of this are the Linux kernel in its many distributions as well as the Berkeley software distribution variant kernels such as FreeBSD, DragonflyBSD, OpenBSD, NetBSD, and Mac OS X. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux, FreeBSD, DragonflyBSD, OpenBSD or NetBSD kernels and/or being compatible with them.    

 

Microsoft Windows

      Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, initial releases of Windows, prior to Windows 95, were considered an operating environment (not to be confused with an operating system). This product line continued to evolve through the 1980s and 1990s, culminating with the release of the Windows 9x series (upgrading the system's capabilities to 32-bit addressing and pre-emptive multitasking) through the mid-1990s and ending with the release of Windows Me in 2000. Microsoft also developed Windows NT, an operating system intended for high-end and business users. This line started with the release of Windows NT 3.1 in 1993 and has continued into the 2000s with Windows 7 and Windows Server 2008.

     The release of Windows XP in October 2001 brought these two product lines together, with the intent of combining the stability of the NT kernel with consumer features from the 9x series. The architecture of Windows NT's kernel is considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Manager, but several subsystems run in user mode. The precise breakdown of user-mode and kernel-mode components has changed from release to release, but the introduction of the User Mode Driver Framework in Windows Vista, and user-mode thread scheduling in Windows 7, have moved more kernel-mode functionality into user-mode processes.

       Windows makes use of the FAT, NTFS, exFAT and ReFS file systems (the latter is only supported and usable in Windows Server 8; Windows cannot boot from it).
Windows uses a drive letter abstraction at the user level to distinguish one disk or partition from another. For example, the path C:\WINDOWS represents a directory WINDOWS on the partition represented by the letter C. Drive C: is most commonly used for the primary hard disk partition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs came about in older applications which made assumptions that the drive that the operating system was installed on was C. The use of drive letters, and the tradition of using "C" as the drive letter for the primary hard disk partition, can be traced to MS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived from CP/M in the 1970s, and ultimately from IBM's CP/CMS of 1967.

Windows Registry
    The Windows Registry is a hierarchical database that stores configuration settings and options on Microsoft Windows operating systems. It contains settings for low-level operating system components as well as the applications running on the platform: the kernel, device drivers, services, SAM, user interface and third party applications all make use of the registry. The registry also provides a means to access counters for profiling system performance. When first introduced with Windows 3.1, the Windows registry's primary purpose was to store configuration information for COM-based components. With the introduction of Windows 95 and Windows NT, its use was extended to tidy up the profusion of per-program INI files that had previously been used to store configuration settings for Windows programs.

Structure
Keys and values
        The registry contains two basic elements: keys and values. Registry keys are similar to folders: in addition to values, each key can contain subkeys, which may contain further subkeys, and so on. Keys are referenced with a syntax similar to Windows' path names, using backslashes to indicate levels of hierarchy. Each subkey has a mandatory name, which is a non-empty string that cannot contain any backslash and whose letter case is insignificant. The hierarchy of registry keys can only be accessed from a known root key handle (which is anonymous but whose effective value is a constant numeric handle) that is mapped to the content of a registry key preloaded by the kernel from a stored "hive", or to the content of a subkey within another root key, or mapped to a registered service or DLL that provides access to its contained subkeys and values.
       HKEY_LOCAL_MACHINE\Software\Microsoft\Windows refers to the subkey "Windows" of the subkey "Microsoft" of the subkey "Software" of the HKEY_LOCAL_MACHINE root key. There are seven predefined root keys, traditionally named according to their constant handles defined in the Win32 API, or by synonymous abbreviations (depending on applications); a minimal sketch of opening such a key from C follows the list:
  • HKEY_LOCAL_MACHINE or HKLM
  • HKEY_CURRENT_CONFIG or HKCC (only in Windows 9x/Me and NT-based versions of Windows)
  • HKEY_CLASSES_ROOT or HKCR
  • HKEY_CURRENT_USER or HKCU
  • HKEY_USERS or HKU
  • HKEY_PERFORMANCE_DATA (only in NT-based versions of Windows, but invisible in the Windows Registry Editor)
  • HKEY_DYN_DATA (only in Windows 9x/Me, and visible in the Windows Registry Editor)
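
Programs reach these keys through the Win32 registry API. A minimal sketch, assuming a Windows build linked against advapi32; the value name ProgramFilesDir is just a commonly present example, not part of the registry's structure itself:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        HKEY key;
        char buf[MAX_PATH];
        DWORD size = sizeof buf;
        /* open a subkey under the HKEY_LOCAL_MACHINE root key, read-only */
        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                          "Software\\Microsoft\\Windows\\CurrentVersion",
                          0, KEY_READ, &key) != ERROR_SUCCESS) {
            fprintf(stderr, "could not open key\n");
            return 1;
        }
        /* query one string value stored under that key */
        if (RegQueryValueExA(key, "ProgramFilesDir", NULL, NULL,
                             (LPBYTE)buf, &size) == ERROR_SUCCESS)
            printf("ProgramFilesDir = %s\n", buf);
        RegCloseKey(key);
        return 0;
    }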






  

Mac OS

       Apple Computer first launched Mac OS in 1984, bundled with its Apple Macintosh personal computer. Apple moved to a nanokernel design in Mac OS 8.6. In contrast, Mac OS X is based on Darwin, which uses a hybrid kernel called XNU, created by combining the 4.3BSD kernel and the Mach kernel.

Mac OS X 

          Mac OS X uses a file system that it inherited from classic Mac OS called HFS Plus, sometimes called Mac OS Extended. HFS Plus is a metadata-rich and case preserving file system. Due to the Unix roots of Mac OS X, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter.
          Filenames can be up to 255 characters. HFS Plus uses Unicode to store filenames. On Mac OS X, the filetype can come from the type code, stored in the file's metadata, or from the filename extension.
HFS Plus has three kinds of links: Unix-style hard links, Unix-style symbolic links and aliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code in userland.
Mac OS X also supports the UFS file system, derived from the BSD Unix Fast File System via NeXTSTEP. However, as of Mac OS X 10.5 (Leopard), Mac OS X can no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard.
Newer versions of Mac OS X are capable of reading and writing to the legacy FAT file systems (16 and 32). They are capable of reading NTFS file systems. Writing is only supported on Mac OS X 10.6 (Snow Leopard) and later, and only after a non-trivial system setting change. Third-party software exists that automates this. Third-party software is still necessary to write to the NTFS file system on Mac OS X versions prior to 10.6 (Snow Leopard).
       Mac OS X 10.3 and later include read-only support for NTFS-formatted partitions. The GPL-licensed NTFS-3G also works on Mac OS X through FUSE and allows reading and writing to NTFS partitions. A performance enhanced commercial version, called Tuxera NTFS for Mac, is also available from the NTFS-3G developers. Paragon Software Group sells a read-write driver named NTFS for Mac OS X, which is also included on some models of Seagate hard drives. Native NTFS write support has been discovered in Mac OS X 10.6 and later, but is not activated by default, although hacks do exist to enable the functionality. However, user reports indicate the functionality is unstable and tends to cause kernel panics, probably the reason why write support has not been enabled or advertised.





