Over the past 20 years, the computer industry has reinvented
itself several times. In the late 1970s, most businesses in North
America used a form of mainframe computer architecture
created by IBM engineers in the 1960s and refined by vendors
including Honeywell, Burroughs, Digital Equipment, Hitachi Data
Systems, and others. These systems ran simple operating systems
supporting a single version of an application at a time, mostly
for accounting.
As new chip designs were patented and developed for mass
manufacturing, they were packaged into small computers aimed at small
businesses. This created an explosive opportunity, introducing many
more people to the benefits of automated accounting, which saved
time and improved the accuracy of information. During the early
1980s, the industry thrived on the creation of new business applications
for municipal governments, hospitals, classroom education,
building construction, and engineering.
The pace of new technology increased again as the personal computer
model was introduced by Apple Computer and IBM. With Microsoft
software for operating systems, spreadsheets, and word processing,
individuals could use a computer for their daily information needs.
People in all walks of life began creating applications to simplify
cooking, writing, homework, and nearly any task imaginable.
With three different models of computing in use (mainframe, distributed,
and personal systems), many organizations changed the way they did
business to achieve greater efficiency, growth, and profit. This was
also fueled by the business trend to decentralize operations, outsource
processing, and empower people to run their own piece of the
organization.
In the late 1980s, the computer industry produced another major
advancement with the commercial spread of UNIX. The concept was to give
application developers a common operating system on which to
deliver applications. Starting from the Berkeley (BSD) kernel, several
manufacturers, including Sun, Digital, IBM, HP, and SCO, packaged
additional support functions into it to provide higher levels of
reliability, availability, and security. During the 1990s, this evolved
into a family of similar but incompatible operating systems, which
undermined the portability of applications across different hardware
vendors. Many would argue that UNIX has failed to deliver the truly
heterogeneous model it was intended to provide. The newest attempt is
seen in the Linux operating system.
The next major breakthrough in computing has been the development of
the Internet as a delivery mechanism for applications and services. As
we have seen, the Internet has changed everything, and it is rapidly
evolving into the main architecture for global communications.
One of the major impacts of Internet usage has been the massive amount
of information being gathered, created, and stored on computers (DNA
and genome mapping, for example). Industry estimates suggest that the
total amount of information in the world will double every two to
three years.
The effect is being felt in the business world with the introduction
of Enterprise Resource Planning (ERP), Supply Chain Management
(SCM), Customer Relationship Management (CRM), Business Intelligence
(BI), e-commerce, and many other major applications that
connect suppliers and customers.
The cumulative effect of this explosion in computing demand has
led business and government organizations to begin thinking about
centralizing information technology. IT managers are struggling
with support, costs are rising, and there is a general feeling that IT
is falling behind, leaving organizations exposed to potential disasters
such as the Year 2000 situation and, more recently, September 11 in
New York.
There are several bright spots in the technology sector that are just
now being refined and delivered to the marketplace. These include copper
chip technology, Silicon-on-Insulator (SOI), and Logical
Partitioning (LPAR). Each of these technologies offers substantial
potential for building the computing architecture required to manage
the demand for computing.
IBM has led the way with the patents and manufacturing development
of copper and SOI. In 1997, fulfilling a dream of several decades, IBM
introduced a technology that allows chipmakers to use copper wires,
rather than the traditional aluminum interconnects, to link transistors
in chips.
Every chip has a base layer of transistors, with layers of wiring stacked
above to connect the transistors to each other and, ultimately, to the rest
of the computer. The transistors, which form the first level of a chip,
are a complex construction of silicon, metal, and impurities precisely
placed to create the millions of minuscule on-or-off switches that make
up the brains of a microprocessor. Aluminum has long been the conductor
of choice, but it is approaching its technological and physical limits.
Pushing electrons through smaller and smaller conduits becomes harder,
and aluminum simply isn't fast enough at these new, smaller sizes.
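To put rough numbers on the difference (a back-of-the-envelope illustration using standard textbook resistivities, not figures from IBM's announcement): the resistivity of copper is about 1.7 µΩ·cm, versus roughly 2.8 µΩ·cm for aluminum. Since a wire's resistance is proportional to resistivity and interconnect delay scales with the RC product, swapping the metal at the same wire geometry changes delay in direct proportion to resistivity:

\[ R = \rho \frac{L}{A}, \qquad t_{\text{delay}} \propto R\,C \]

\[ \frac{R_{\text{Cu}}}{R_{\text{Al}}} = \frac{\rho_{\text{Cu}}}{\rho_{\text{Al}}} \approx \frac{1.7}{2.8} \approx 0.61 \]

That is roughly a 40 percent cut in interconnect resistance, and hence in wiring delay, at the same dimensions, which is exactly the headroom needed as the conduits keep shrinking.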
Scientists had seen this problem coming for years and tried to find a
way to replace aluminum with one of the three metals that conduct
electricity better: copper, silver, or gold. Of course, if that were
simple, it would have been done long ago. None of those metals is as
easy to work with as aluminum at ever-shrinking dimensions. Any new
material presents fresh challenges, and reliably filling submicron
channels is a bit like filling the holes of a golf course from an
airplane.
IBM had to develop a diffusion barrier to keep the copper from
contaminating the silicon beneath it. The company has now announced the
first commercially viable implementation of Silicon-on-Insulator (SOI)
and the ability to apply it in building fully functional microprocessors.
SOI refers to the process of implanting oxygen into a silicon wafer to
create an insulating layer, then annealing the wafer until a thin layer
of silicon sits isolated above that insulator. The transistors are then
built on top of this thin layer. SOI technology improves performance
over bulk CMOS by 25 to 35 percent, reduces power usage by a factor of
1.7 to 3, and delivers higher performance and reliability per processor.
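A rough sketch of where the power advantage comes from (using the standard CMOS dynamic-power formula with assumed, illustrative numbers rather than IBM's measured data): switching power is approximately proportional to capacitance, voltage squared, and frequency, and the buried oxide layer in SOI sharply reduces the parasitic junction capacitance. Lower capacitance also lets the supply voltage drop without losing speed:

\[ P_{\text{dynamic}} = \alpha\, C\, V^{2} f \]

\[ \frac{P_{\text{SOI}}}{P_{\text{bulk}}} = \frac{C_{\text{SOI}}}{C_{\text{bulk}}} \left( \frac{V_{\text{SOI}}}{V_{\text{bulk}}} \right)^{2} \approx 0.75 \times \left( \frac{1.5}{1.8} \right)^{2} \approx 0.52 \]

With an assumed 25 percent capacitance reduction and a supply voltage lowered from 1.8 V to 1.5 V, power falls to about half, a factor of roughly 1.9, consistent with the 1.7 to 3 times range quoted above.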