Thursday, February 24, 2011

Graphics Workstation

A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily for use by one person at a time, workstations are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or to a PC connected to a network.


A workstation is a computer intended for use by one person, but with a much faster processor and more memory than an ordinary personal computer. Workstations are designed for powerful business applications that perform large numbers of calculations or require high-speed graphical displays.


Historically, workstations have offered higher performance than personal computers, especially with respect to CPU and graphics, memory capacity and multitasking capability. They are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. A workstation console consists of at least a high-resolution display, a keyboard and a mouse, but may also include multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), etc. Workstations were the first segment of the computer market to offer advanced accessories and collaboration tools.

 The term workstation is also sometimes used to mean a personal computer connected to a mainframe computer, to distinguish it from “dumb” display terminals with limited applications.

Workstation class graphics include:
  • Advanced technology including full floating point precision for superior image quality, high performance for large datasets and dual display/ultra-high resolution support.
  • Application certification for OpenGL and DirectX support on Windows® and Linux®.
  • Robust software and hardware compatibility testing to maximize uptime and productivity.
  • Support for workstation users with direct links to engineering resources.
refer : http://www.dell.com/content/topics/global.aspx/solutions/en/precision_graphics?c=us&l=en&cs=18


SHORT NOTE: INPUT DEVICES

Input Devices

An input device is any hardware device that sends data to the computer. Without input devices, a computer would only be a display device and would not allow users to interact with it, much like a TV. A Logitech trackball mouse is one example of an input device; keyboards, scanners and microphones are other common ones.

Midpoint circle algorithm (explanation)

The midpoint circle algorithm is an algorithm used to determine the points needed for drawing a circle. The algorithm is a variant of  Bresenham's line algorithm, and is thus sometimes known as Bresenham's circle algorithm, although not actually invented by Bresenham.

Advantages:
The midpoint method is used for deriving efficient scan-conversion algorithms to draw geometric curves on raster displays.
The method is general: it transforms the nonparametric equation f(x,y) = 0, which describes the curve, into an algorithm that draws the curve.


Disadvantages:
- Time consumption is high.
- The distance between the pixels is not equal, so we won't get a smooth circle.

The algorithm starts with the circle equation x² + y² = r², with the center of the circle located at (0,0). We consider first only the first octant and draw a curve which starts at the point (r,0) and proceeds upwards and to the left, reaching an angle of 45°.
The "fast" direction here is the y direction. The algorithm always does a step in the positive y direction (upwards), and every now and then also has to do a step in the "slow" direction, the negative x direction.





refer : http://en.wikipedia.org/wiki/Midpoint_circle_algorithm

Tuesday, February 22, 2011

OS-ASSEMBLER

Definition -

An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations.
It is a computer program that translates between lower-level representations of computer programs.


Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities.
The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution—e.g., to generate common short sequences of instructions as inline, instead of called subroutines.
Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. Modern assemblers, especially for RISC architectures such as SPARC or POWER, as well as x86 and x86-64, optimize instruction scheduling to exploit the CPU pipeline efficiently.

A program written in assembly language consists of a series of mnemonic statements and meta-statements (known variously as directives, pseudo-instructions and pseudo-ops), comments and data. These are translated by an assembler to a stream of executable instructions that can be loaded into memory and executed. Assemblers can also be used to produce blocks of data, from formatted and commented source code, to be used by other code.
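As a rough illustration of the mnemonic-to-opcode translation and symbol resolution described above, here is a toy C sketch. The instruction format, mnemonics, opcodes and the LOOP label are all invented for the example; they do not belong to any real instruction set or assembler.

#include <stdio.h>
#include <string.h>

/* Made-up opcode table: each instruction is assembled to two bytes,
   an opcode followed by an operand. */
struct op { const char *mnemonic; unsigned char opcode; };

static const struct op optab[] = {
    { "LOAD",  0x01 },
    { "ADD",   0x02 },
    { "STORE", 0x03 },
    { "JMP",   0x04 },
};

static unsigned char lookup(const char *mnemonic)
{
    for (size_t i = 0; i < sizeof optab / sizeof optab[0]; i++)
        if (strcmp(optab[i].mnemonic, mnemonic) == 0)
            return optab[i].opcode;
    return 0xFF;                        /* unknown mnemonic */
}

int main(void)
{
    /* "Source program":  LOOP: LOAD 10 / ADD 11 / STORE 12 / JMP LOOP */
    const char *mnemonics[] = { "LOAD", "ADD", "STORE", "JMP" };
    int operands[]          = { 10, 11, 12, 0 };

    int loop_address = 0;               /* pass 1: the label LOOP is at address 0 */
    operands[3] = loop_address;         /* pass 2: patch the symbolic reference   */

    for (int i = 0; i < 4; i++)         /* emit the object code */
        printf("%02X %02X\n", (unsigned)lookup(mnemonics[i]), (unsigned)operands[i]);
    return 0;
}

A real assembler does the same two things at much larger scale: translate each mnemonic through an opcode table, and replace every symbolic name with the address recorded for it.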

OS-COMPILER

A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as object code). The most common reason for wanting to transform source code is to create an executable program.

The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower level language (e.g., assembly language or machine code).



refer:     http://www.personal.kent.edu/~rmuhamma/Compilers/MyCompiler/compilerIntro.htm
             http://www.webopedia.com/TERM/C/compiler.html
             http://whatis.techtarget.com/definition/0,,sid9_gci211824,00.html


OS-LOADER

LOADER:

A loader is the part of an operating system that is responsible for loading programs, one of the essential stages in starting a program; in other words, a loader is a program that places programs into memory and prepares them for execution. Loading a program involves reading the contents of the executable file (the file containing the program text) into memory and then carrying out any other preparatory tasks required to make the executable ready to run. Once loading is complete, the operating system starts the program by passing control to the loaded program code.

LOADER is an operating system utility that copies programs from a storage device to main memory, where they can be executed. In addition to copying a program into main memory, the loader can also replace virtual addresses with physical addresses.
Most loaders are transparent, i.e., you cannot directly execute them, but the operating system uses them when necessary.

In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, it's called random access memory), and gives that program control of the computer (allows it to execute its instructions).
A program that is loaded may itself contain components that are not initially loaded into main storage, but can be loaded if and when their logic is needed. In a multitasking operating system, a program that is sometimes called a dispatcher juggles the computer processor's time among different tasks and calls the loader when a program associated with a task is not already in main storage. (By program here, we mean a binary file that is the result of a programming language compilation, linkage editing, or some other program preparation process.)
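The same idea can be observed from user space with the POSIX dynamic-loading calls dlopen() and dlsym(): the run-time loader maps a shared object into memory, resolves its symbols, and hands back a pointer we can call. The sketch below loads the math library and looks up cos purely as a convenient example (the library name "libm.so.6" is what a typical Linux system uses); it demonstrates run-time loading of a library rather than the loading of a whole program at startup. On many systems it is built with something like: gcc demo.c -ldl

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* ask the loader to map the library */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* resolve the symbol "cos" to an address inside the loaded library */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));      /* control passes into the loaded code */

    dlclose(handle);
    return 0;
}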




Monday, February 21, 2011

OS- LINKER

  A linker or link editor is a program that takes one or more objects generated by a compiler and combines them into a single executable program.

Many operating system environments allow dynamic linking, that is, postponing the resolution of some undefined symbols until a program is run. In that case the executable code still contains undefined symbols, plus a list of objects or libraries that will provide definitions for them.

Linkers can take objects from a collection called a library. Some linkers do not include the whole library in the output; they only include its symbols that are referenced from other object files or libraries. Libraries exist for diverse purposes, and one or more system libraries are usually linked in by default.

The linker also takes care of arranging the objects in a program's address space. This may involve relocating code that assumes a specific base address to another base. Since a compiler seldom knows where an object will reside, it often assumes a fixed base location (for example, zero). 

Linking is the process of combining various pieces of code and data together to form a single executable that can be loaded in memory.
Linking can be done at compile time, at load time (by loaders) and also at run time (by application programs). The process of linking dates back to the late 1940s, when it was done manually. Now, we have linkers that support complex features, such as dynamically linked shared libraries.

Linkers and loaders perform various related but conceptually different tasks:
  • Program loading: This refers to copying a program image from hard disk to main memory in order to put the program in a ready-to-run state. In some cases, program loading also might involve allocating storage space or mapping virtual addresses to disk pages.
  • Relocation: Compilers and assemblers generate the object code for each input module with a starting address of zero. Relocation is the process of assigning load addresses to the different parts of the program by merging all sections of the same type into one section. The code and data sections also are adjusted so they point to the correct runtime addresses.
  • Symbol resolution: A program is made up of multiple subprograms; a reference from one subprogram to another is made through symbols. A linker's job is to resolve the reference by noting the symbol's location and patching the caller's object code.

    So a considerable overlap exists between the functions of linkers and loaders.
    One way to think of them is: the loader does the program loading; the linker does the symbol resolution; and either of them can do the relocation.
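A minimal sketch of symbol resolution in practice, assuming GCC on a Unix-like system: the two small (hypothetical) source files below are compiled separately, so main.o carries an undefined reference to greet, and the final link step resolves that reference against the definition found in greet.o.

/* greet.c -- defines the symbol `greet` */
#include <stdio.h>
void greet(void) { puts("hello from greet.o"); }

/* main.c -- references `greet`, defined elsewhere */
void greet(void);                 /* declaration only: an unresolved symbol in main.o */
int main(void) { greet(); return 0; }

/* Build steps:
 *   gcc -c greet.c              greet.o: greet defined, address not yet final
 *   gcc -c main.c               main.o: greet is still an undefined symbol
 *   gcc main.o greet.o -o hello the linker resolves the symbol and relocates the code
 * Running `nm main.o` before linking shows greet marked U (undefined).
 */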

Sunday, February 20, 2011

OS- CONTEXT SWITCHING

A context switch is the computing process of storing and restoring state (context) of a CPU so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU. The context switch is an essential feature of a multitasking operating system.

A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch.


The scheduler is the part of the operating system that manages context switching; it performs a context switch under one of the following conditions:
  1. Multitasking: Within a preemptive multitasking operating system, the scheduler allows every task (according to its priority level) to run for a certain amount of time, called its time slice, after which a timer interrupt triggers the operating system to schedule another process for execution instead.
    If a process has to wait for one of the computer's resources or performs an I/O operation, the operating system likewise schedules another process for execution.
  2. Interrupt handling: Some CPU architectures (like the Intel x86 architecture) are interrupt driven. When an interrupt occurs, the scheduler calls the corresponding interrupt handler to service the interrupt after switching contexts; the scheduler suspends the currently running process until the interrupt handler has finished executing.
  3. User and kernel mode switching: Context switching can be described in slightly more detail as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. When a transition between user mode and kernel mode is required in an operating system, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time.
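As a user-space illustration of saving and restoring CPU state, the sketch below uses the POSIX <ucontext.h> calls getcontext/makecontext/swapcontext: swapcontext() stores the current registers and stack pointer and resumes a previously saved context, which is essentially what a kernel scheduler does when it switches processes (the kernel version additionally switches address spaces and privileged state).

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];            /* private stack for the task context */

static void task(void)
{
    puts("task: running");
    swapcontext(&task_ctx, &main_ctx);        /* save task state, resume main */
    puts("task: resumed from the same point");
}

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;    /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    puts("main: switching to task");
    swapcontext(&main_ctx, &task_ctx);        /* first switch into the task */
    puts("main: back, switching again");
    swapcontext(&main_ctx, &task_ctx);        /* resume the task where it stopped */
    puts("main: done");
    return 0;
}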
for more details visit:   http://www.linfo.org/context_switch.html
                                  http://simple.wikipedia.org/wiki/Context_switch
                                  http://en.wikipedia.org/wiki/Context_switch









OS- Spooling and Buffering

SPOOLING:


Acronym for simultaneous peripheral operations on-line, spooling refers to putting jobs in a buffer, a special area in memory or on a disk where a device can access them when it is ready. Spooling is useful because devices access data at different rates. The buffer provides a waiting station where data can rest while the slower device catches up.

pls do refer :        http://www.webopedia.com/TERM/S/spooling.html
                           http://www.techterms.com/definition/spooling

Spooling refers to a process of transferring data by placing it in a temporary working area where another program may access it for processing at a later point in time.

Spooling refers to copying files in parallel with other work. The most common use is in reading files used by a job into or writing them from a buffer on a magnetic tape or a disk. Spooling is useful because devices access data at different rates.

The most common spooling application is print spooling .



BUFFERING:

buffer- A temporary storage area, usually in RAM. The purpose of most buffers is to act as a holding area, enabling the CPU to manipulate data before transferring it to a device.

Buffers are commonly used when burning data onto a compact disc, where the data is transferred to the buffer before being written to the disc.
Another common use of buffers is for printing documents. When you enter a PRINT command, the operating system copies your document to a print buffer (a free area in memory or on a disk) from which the printer can draw characters at its own pace. This frees the computer to perform other tasks while the printer is running in the background. Print buffering is called spooling.


A Buffer is a region of memory used to temporarily hold data while it is being moved from one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device (such as a keyboard) or just before it is sent to an output device (such as a printer). However, a buffer may be used when moving data between processes within a computer. This is comparable to buffers in telecommunication. Buffers can be implemented in either hardware or software, but the vast majority of buffers are implemented in software. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or in the case that these rates are variable, for example in a printer spooler.

Buffering: reading and writing data from the hard disk takes a long time, so to improve the speed of data processing, the data the processor needs next is kept in faster temporary storage such as cache memory or a CPU register. For example, when you cut a certain line from a text file to copy it into another file, the cut data is held in a buffer until it is written into the other file.
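In software, a buffer is often implemented as a simple ring (circular) buffer. The C sketch below is purely illustrative and not taken from any particular library: a producer fills the buffer and a slower consumer drains it later, which is exactly the rate-smoothing role described above.

#include <stdio.h>
#include <stdbool.h>

#define BUF_SIZE 8

typedef struct {
    char data[BUF_SIZE];
    int  head;    /* next slot to write            */
    int  tail;    /* next slot to read             */
    int  count;   /* items currently in the buffer */
} ring_buffer;

bool buf_put(ring_buffer *b, char c)
{
    if (b->count == BUF_SIZE) return false;   /* buffer full  */
    b->data[b->head] = c;
    b->head = (b->head + 1) % BUF_SIZE;
    b->count++;
    return true;
}

bool buf_get(ring_buffer *b, char *c)
{
    if (b->count == 0) return false;          /* buffer empty */
    *c = b->data[b->tail];
    b->tail = (b->tail + 1) % BUF_SIZE;
    b->count--;
    return true;
}

int main(void)
{
    ring_buffer b = {0};
    for (char c = 'a'; c <= 'e'; c++)         /* fast producer fills the buffer  */
        buf_put(&b, c);

    char c;
    while (buf_get(&b, &c))                   /* slower consumer drains it later */
        putchar(c);
    putchar('\n');
    return 0;
}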



OS- MULTIPROCESSING

Multiprocessing is running a system with more than one processor.
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system.

Multiprocessing can be said to be either asymmetric or symmetric.
Asymmetric multiprocessing designates some processors to perform system tasks only, and others to run applications only. This is a rigid design that results in lost performance during those times when the computer needs to run many system tasks and no user tasks, or vice versa.
Symmetric multiprocessing, often abbreviated SMP, allows either system or user tasks to run on any processor, which is more flexible and therefore leads to better performance. SMP is what most multiprocessing PC motherboards use.

Multiprocessing operating systems enable several programs to run concurrently.
UNIX is one of the most widely used multiprocessing systems.

A system can be both multiprocessing and multiprogramming, only one of the two, or neither of the two.
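On Linux and many other POSIX systems you can ask how many processors are currently online, which is the pool an SMP scheduler spreads runnable tasks across. A small sketch using the standard sysconf() call:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   /* processors currently online */
    printf("processors online: %ld\n", ncpus);
    return 0;
}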

 For more details refer : http://en.wikipedia.org/wiki/Multiprocessing

Saturday, February 19, 2011

OS- MULTITASKING


DEFINITION - 

Multitasking, in an operating system, means allowing a user to perform more than one computer task (such as the operation of an application program) at a time. The operating system is able to keep track of where you are in each of these tasks and go from one to the other without losing information.


Multitasking operating systems allow more than one program to run at a time. They can support either preemptive multitasking, where the OS *doles out time to applications (virtually all modern OSes) or cooperative multitasking, where the OS waits for the program to give back control (Windows 3.x, Mac OS 9 and earlier).

A multitasking operating system is any type of system that is capable of running more than one program at a time. Most modern operating systems are configured to handle multiple programs simultaneously, with the exception of some privately developed systems that are designed for use in specific business settings. As with most types of communications technology, the multitasking operating system has evolved over time, and is likely to continue evolving as communication demands keep growing in many cultures.



 for more details visit :      http://en.wikipedia.org/wiki/Computer_multitasking


*doles = distributes something.

OS- MULTIPROGRAMMING

Multiprogramming is one of the more basic types of parallel processing that can be employed in many different environments. Essentially, multiprogramming makes it possible for several programs to be active at the same time, while still running through a single processor.

Multiprogramming is very different from the multiprocessing because even though there may be several programs currently active, the uniprocessor is not simultaneously executing commands for all the programs. Instead, the processor addresses each program, executes a single command, then moves on to the next program in the queue. The previous program remains active, but enters into a passive state until the uniprocessor returns to the front of the queue and executes a second command. 

Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is done only when the currently running process requests I/O, or terminates. It was commonly used to keep the CPU busy while one or more processes are doing I/O. It is now mostly superseded by multitasking, in which processes also lose the CPU when their time quantum expires.
Multiprogramming makes efficient use of the CPU by overlapping the demands for the CPU and its I/O devices from various users. It attempts to increase CPU utilization by always having something for the CPU to execute.
The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100% busy. This of course assumes the major delay is the wait while data is copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).

Friday, February 18, 2011

MCA timetable..



Time  | MON                       | TUES     | WED                         | THURS                       | FRI
9:15  | CS (Pracs - both batches) | DS       | P&S                         | OS                          | OS (Pracs 2) / DS (Pracs 1)
10:15 | CS (Lec - Shirin maam)    | DS       | CS                          | DS                          | -
11:15 | CS (Lec - Rashmi maam)    | OS       | OS (Pracs 1) / CG (Pracs 2) | -                           | -
12:40 | FM                        | CG       | -                           | CG (Pracs 1) / DS (Pracs 2) | DS
1:40  | OS                        | FM       | -                           | CG                          | DS
2:40  | P&S                       | FM       | FM                          | CG                          | CG
3:40  | P&S                       | FM (Tut) | OS                          | -                           | P&S (Tut)

Saturday, February 12, 2011

Why Bresenham's algorithm is advantageous over the DDA algorithm

Disadvantage of DDA:

Round-off error accumulates, because successive addition of floating-point increments is used to find the pixel positions, and the floating-point arithmetic takes a long time to compute each pixel position.


Advantages of bresenham's line drawing algorithm..


The Bresenham line algorithm has the following advantages:
– A fast incremental algorithm
– Uses only integer calculations


The Bresenham algorithm is another incremental scan-conversion algorithm.
The big advantage of this algorithm is that it uses only integer calculations such as addition/subtraction and bit shifting.
The main advantage of Bresenham's algorithm is speed.

The disadvantage of such a simple algorithm is that it is meant for basic line drawing. The "advanced" topic of antialiasing isn't part of Bresenham's algorithm, so to draw smooth lines, you'd want to look into a different algorithm.
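For reference, here is a compact C sketch of the widely used all-octant form of Bresenham's line algorithm; note that the inner loop uses only integer additions, subtractions and comparisons. plot_pixel() is again a hypothetical stand-in for whatever routine actually sets a pixel.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical output routine; a real program would set a pixel instead. */
void plot_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx =  abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                           /* integer decision variable */

    for (;;) {
        plot_pixel(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step along x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step along y */
    }
}

int main(void)
{
    bresenham_line(0, 0, 8, 5);                  /* prints the pixels of the line */
    return 0;
}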