I bought a copy of Computer Organization and Design: The Hardware/Software Interface (MIPS Edition). I never studied computer science formally, and although I have worked in the computer industry for eight years, I still have no real grounding in the fundamentals. I feel a bit ashamed of that, and I regret the time that has slipped away. Enough said; more words add little meaning. Back to the point: read the book and think!
The English title is Computer Organization and Design: The Hardware/Software Interface, Fifth Edition, Asian Edition, which can be rendered in Chinese as 《計算機組成與設計——硬件/軟件接口》, 5th edition, Asian edition. Authors: David A. Patterson and John L. Hennessy.
Honestly, I would love to translate this book from beginning to end and see what computing technology really is, so that I no longer dread every round of layoffs. This is supposed to be a basic book about computers, yet I hesitate to agree with that label, because my own professional foundation is genuinely weak. Apart from typing and chatting, I have never seriously stopped to ask why things work the way they do. I am middle-aged, time flies past, and I still know next to nothing.
All right, let's read the preface and see what the book covers and what can be gained from it.
Preface
The most beautiful thing we can experience is the mysterious. It is the source of all true art and science. Albert Einstein, What I Believe, 1930
Look at that: the book opens by quoting the scientific giant Albert Einstein. Perhaps it is the celebrity effect, but famous sayings become famous precisely because they distill the life experience of remarkable people. What I Believe is an essay by Einstein, and the sentence is not hard to translate: the most beautiful thing we can experience is the mysterious, and it is the source of all true art and science. A classic line, worthy of a master.
About This Book
We believe learning in computer science and engineering should reflect the current state of the field, as well as introduce the principles that are shaping computing. We also feel that readers in every specialty of computing need to appreciate the organizational paradigms that determine the capabilities, performance, energy, and ultimately, the success of computer systems.
Modern computer technology requires professionals of every computing specialty to understand both hardware and software. The interaction between hardware and software at a variety of levels also offers a framework for understanding the fundamentals of computing. Whether your primary interest is hardware or software, computer science or electrical engineering, the central ideas in computer organization and design are the same. Thus, our emphasis in this book is to show the relationship between hardware and software and to focus on the concepts that are the basis for current computers.
The recent switch from uniprocessor to multicore microprocessors confirmed the soundness of this perspective, given since the first edition. While programmers could ignore that advice and rely on computer architects, compiler writers, and silicon engineers to make their programs run faster or be more energy-efficient without change, that era is over. For programs to run faster, they must become parallel. While the goal of many researchers is to make it possible for programmers to be unaware of the underlying parallel nature of the hardware they are programming, it will take many years to realize this vision. Our view is that for at least the next decade, most programmers are going to have to understand the hardware/software interface if they want programs to run efficiently on parallel computers.
The audience for this book includes those with little experience in assembly language or logic design who need to understand basic computer organization, as well as readers with backgrounds in assembly language and/or logic design who want to learn how to design a computer or understand how a system works and why it performs as it does.
Having typed up that much of the book, I am not sure whether I should keep going, how I should read it, or how to take notes. I plan to use a translation tool and a notebook for new words. Whatever happens, I must not slide back into thinking that reading is useless; in middle age, a bit of study is a way to recharge. Translation is hard work. Fortunately there are online dictionaries now, although their output sometimes reads like a child stacking building blocks rather than a person conveying an idea. Words can be pieced together, but the thought behind them has to be carried across. The dictionary I use is http://dictionary.cambridge.org/dictionary/english-chinese-simplified/glacial, and it feels adequate. Back to the point; here is my translation:
About this book (my translation):
We believe that learning in computer science and engineering should reflect the current state of the field, and should introduce the principles that are shaping computing. We also feel that readers in every specialty of computing need to appreciate the organizational paradigms that determine the capabilities, performance, energy, and ultimately the success of computer systems.
Modern computer technology requires professionals of every computing specialty to understand both hardware and software. The interaction between hardware and software at many different levels also offers a framework for understanding the fundamentals of computing. Whether your primary interest is hardware or software, computer science or electrical engineering, the central ideas of computer organization and design are the same. Therefore, this book emphasizes the relationship between hardware and software and focuses on the concepts that are the basis of current computers. The recent switch from uniprocessors to multicore microprocessors has confirmed the soundness of this view, held since the first edition. The era in which programmers could ignore that advice and rely on computer architects, compiler writers, and silicon engineers to make their programs run faster or more energy-efficient without any change is over. For programs to run faster, they must become parallel. Although many researchers aim to make it possible for programmers to remain unaware of the parallel nature of the hardware they program, it will take many years to realize that vision. Our view is that, for at least the next decade, most programmers will have to understand the hardware/software interface if they want their programs to run efficiently on parallel computers.
The readers of this book include those with little experience in assembly language or logic design who need to understand basic computer organization, as well as readers with a background in assembly language and/or logic design who want to learn how to design a computer, or to understand how a system works and why it performs the way it does.
It took real effort to get through that passage. The book looks like the right one for me, yet the English keeps tripping me up, so I also bought two books on English grammar and translation. Reading a book I only half understand, half in a dream, is not a rigorous way to study, but I have no better option: I am not formally trained, at best an amateur computer enthusiast. The two English books are 《大學英語語法第五版講座與測試》 (East China University of Science and Technology Press, chief editor Xu Guanglian) and 《英語用法指南第三版》. I have invested a fair amount of money in books and a lot of time, in the hope of becoming a professional.
Afternoon of 2018-01-03, just up from a nap and feeling sharp. I hope I can study hard; if another round of restructuring and reassignment comes, what else could I do? I am still young enough to learn something and strengthen my mind.
About the Other Book
Some readers may be familiar with Computer Architecture: A Quantitative Approach, popularly known as Hennessy and Patterson. (This book in turn is often called Patterson and Hennessy.) Our motivation in writing the earlier book was to describe the principles of computer architecture using solid engineering fundamentals and quantitative cost/performance tradeoffs. We used an approach that combined examples and measurements, based on commercial systems, to create realistic design experiences. Our goal was to demonstrate that computer architecture could be learned using quantitative methodologies instead of a descriptive approach. It was intended for the serious computing professional who wanted a detailed understanding of computers.
A majority of the readers for this book do not plan to become computer architects. The performance and energy efficiency of future software systems will be dramatically affected, however, by how well software designers understand the basic hardware techniques at work in a system. Thus, compiler writers, operating system designers, database programmers, and most other software engineers need a firm grounding in the principles presented in this book. Similarly, hardware designers must understand clearly the effects of their work on software applications.
Thus, we knew that this book had to be much more than a subset of the material in Computer Architecture, and the material was extensively revised to match the different audience. We were so happy with the result that the subsequent editions of Computer Architecture were revised to remove most of the introductory material; hence, there is much less overlap today than with the first editions of both books.
This is the first time I have seen the preface of one book discuss another book. What were the authors thinking: selling a few more copies, or simply pointing out related material? Never mind. The recommended-reading list at the back of this book also mentions that book, 《計算機體系結構:量化研究方法》 (Computer Architecture: A Quantitative Approach, English edition, fifth edition).
About the other book (my translation):
Some readers may be familiar with Computer Architecture: A Quantitative Approach, popularly known as Hennessy and Patterson (this book, in turn, is often called Patterson and Hennessy). Our motivation in writing that earlier book was to describe the principles of computer architecture using solid engineering fundamentals and quantitative cost/performance tradeoffs. We used an approach that combined examples and measurements, based on commercial systems, to create realistic design experiences. Our goal was to show that computer architecture could be learned with quantitative methodologies instead of a descriptive approach. It was intended for serious computing professionals who want a detailed understanding of computers.
Most readers of this book do not plan to become computer architects. Nevertheless, the performance and energy efficiency of future software systems will be dramatically affected by how well software designers understand the basic hardware techniques at work in a system. Thus, compiler writers, operating system designers, database programmers, and most other software engineers need a firm grounding in the principles presented in this book. Likewise, hardware designers must clearly understand the effects of their work on software applications.
Thus, we knew that this book had to be much more than a subset of the material in Computer Architecture, and the material was extensively revised to suit a different audience. We were so pleased with the result that subsequent editions of Computer Architecture were revised to remove most of the introductory material; hence, the two books overlap far less today than their first editions did.
About the Asian Edition
With the consent of the authors, we have developed this Asian Edition of Computer Organization and Design: The Hardware/Software Interface to better reflect local teaching practice of computer courses in Asian classrooms and the development of computer technology in this region. The major adjustments include:
# An introduction to the "TH-2 High Performance Computing system" (as a demonstration of a cluster computing system) to replace Appendix A on digital logic, and a new section on "Networks-on-Chip" as Appendix F. Both reflect the latest progress in computer technology and can serve as a good reference for readers.
# Abridgment of some sections of Chapter 2 to better suit the current curriculums applied in Asian classrooms.
With these adjustments listed above, the Asian Edition is enhanced with local features while keeping the main structure and knowledge framework of the original version.
Special thanks go to Prof. Zhiying Wang, Prof. Chung-Ping Chung, Associate Prof. Li Shen, and Dr. Sheng Ma for their contributions to the development of this Asian Edition.
About the Asian Edition (my translation):
With the authors' consent, we developed this Asian Edition of Computer Organization and Design: The Hardware/Software Interface to better reflect the way computer courses are taught in Asian classrooms and the development of computer technology in this region. The major adjustments include:
# An introduction to the "TH-2 High Performance Computing System" (as a demonstration of a cluster computing system) replaces Appendix A on digital logic, and a new section on "Networks-on-Chip" is added as Appendix F. Both reflect the latest progress in computer technology and provide readers with a good reference.
# Some sections of Chapter 2 are abridged to better suit the curricula used in Asian classrooms.
With these adjustments, the Asian Edition keeps the main structure and knowledge framework of the original while adding local features.
Special thanks go to Prof. Zhiying Wang, Prof. Chung-Ping Chung, Associate Prof. Li Shen, and Dr. Sheng Ma for their contributions to this Asian Edition.
Changes for the Fifth Edition
We had six major goals for the fifth edition of Computer Organization and Design: demonstrate the importance of understanding hardware with a running example; highlight major themes across the topics using margin icons that are introduced early; update examples to reflect the changeover from the PC era to the PostPC era; spread the material on I/O throughout the book rather than isolating it into a single chapter; update the technical content to reflect changes in the industry since the publication of the fourth edition in 2009; and put appendices and optional sections online instead of including a CD to lower costs and to make this edition viable as an electronic book.
Changes for the Fifth Edition (my translation):
We had six major goals for the fifth edition of Computer Organization and Design: to demonstrate the importance of understanding hardware with a running example; to highlight major themes across topics using margin icons that are introduced early; to update the examples to reflect the changeover from the PC era to the PostPC era; to spread the material on I/O throughout the book rather than isolating it in a single chapter; to update the technical content to reflect changes in the industry since the fourth edition was published in 2009; and to put the appendices and optional sections online instead of including a CD, to lower costs and to make this edition viable as an electronic book.
Chapter or Appendix | Sections | Software focus | Hardware focus | Comments
1. Computer Abstractions and Technology | 1.1 to 1.11 | | |
| ☯ 1.12 (History) | | |
2. Instructions: Language of the Computer | 2.1 to 2.12 | | |
| ☯ 2.13 (Compilers & Java) | | |
| 2.14 to 2.18 | | |
| ☯ 2.19 (History) | | |
E. RISC Instruction-Set Architecture | E.1 to E.7 | | |
3. Arithmetic for Computers | 3.1 to 3.5 | | |
| 3.6 to 3.8 (Subword Parallelism) | | |
| 3.9 to 3.10 (Fallacies) | | |
| ☯ 3.11 (History) | | |
| 2.11 (History) | | |
4. The Processor | 4.1 (Overview) | | |
| 4.2 (Logic Conventions) | | |
| 4.3 to 4.4 (Simple Implementation) | | |
| 4.5 (Pipelining Overview) | | |
| 4.6 (Pipelined Datapath) | | |
| 4.7 to 4.9 (Hazards, Exceptions) | | |
| 4.10 to 4.12 (Parallel, Real Stuff) | | |
| ☯ 4.13 (Verilog Pipeline Control) | | |
| 4.14 to 4.15 (Fallacies) | | |
| ☯ 4.16 (History) | | |
D. Mapping Control to Hardware | D.1 to D.6 | | |
5. Large and Fast: Exploiting Memory Hierarchy | 5.1 to 5.10 | | |
| ☯ 5.11 (Redundant Arrays of Inexpensive Disks) | | |
| ☯ 5.12 (Verilog Cache Controller) | | |
| 5.13 to 5.16 | | |
| ☯ 5.17 (History) | | |
6. Parallel Processors from Client to Cloud | 6.1 to 6.8 | | |
| ☯ 6.9 (Networks) | | |
| 6.10 to 6.14 | | |
| ☯ 6.15 (History) | | |
A. Assemblers, Linkers, and the SPIM Simulator | A.1 to A.11 | | |
C. Graphics Processor Units | C.1 to C.10 | | |
Reading guide: Read carefully; Read if you have time; Reference; Review or read; Read for culture.
Before discussing the goals in detail, let's look at the table on page vii. It shows the hardware and software paths through the material. Chapters 1, 4, 5, and 6 are found on both paths, no matter what the experience or the focus. Chapter 1 discusses the importance of energy and how it motivates the switch from single core to multicore microprocessors, and introduces the eight great ideas in computer architecture. Chapter 2 is likely to be review material for the hardware-oriented, but it is essential reading for the software-oriented, especially for those readers interested in learning more about compilers and object-oriented programming languages. Chapter 3 is for readers interested in constructing a datapath or in learning more about floating-point arithmetic. Some will skip parts of Chapter 3, either because they don't need them or because they offer a review. However, we introduce the running example of matrix multiply in this chapter, showing how subword parallelism offers a fourfold improvement, so don't skip Sections 3.6 to 3.8. Chapter 4 explains pipelined processors. Sections 4.1, 4.5, and 4.10 give overviews, and Section 4.12 gives the next performance boost for matrix multiply for those with a software focus. Those with a hardware focus, however, will find that this chapter presents core material; they may also, depending on their background, want to read Appendix C on logic design first. The last chapter, on multicores, multiprocessors, and clusters, is mostly new content and should be read by everyone. It was significantly reorganized in this edition to make the flow of ideas more natural and to include much more depth on GPUs, warehouse-scale computers, and the hardware-software interface of network interface cards that are key to clusters.
The first of the six goals for this fifth edition was to demonstrate the importance of understanding modern hardware to get good performance and energy efficiency with a concrete example. As mentioned above, we start with subword parallelism in Chapter 3 to improve matrix multiply by a factor of 4. We double performance in Chapter 4 by unrolling the loop to demonstrate the value of instruction-level parallelism. Chapter 5 doubles performance again by optimizing for caches using blocking. Finally, Chapter 6 demonstrates a speedup of 14 from 16 processors by using thread-level parallelism. All four optimizations in total add just 24 lines of C code to our initial matrix multiply example.
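Since the preface only describes this optimization sequence in words, here is a minimal C sketch of the idea behind one of the steps, the cache blocking credited to Chapter 5. It is my own illustration, not the book's 24-line example: the function names dgemm_naive and dgemm_blocked, the matrix size N, and the tile size BLOCK are assumptions chosen for the demonstration.

/* A minimal sketch (not the book's code) of cache blocking for matrix multiply.
   Compile with: cc -O2 -std=c99 blocked_dgemm.c */
#include <stdio.h>
#include <stdlib.h>

#define N     512   /* matrix dimension; assumed divisible by BLOCK */
#define BLOCK 32    /* hypothetical tile size; tune it to your cache */

/* Naive triple loop: B is walked column-wise, so it keeps missing in cache. */
static void dgemm_naive(const double *A, const double *B, double *C)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = C[i * N + j];
            for (int k = 0; k < N; k++)
                sum += A[i * N + k] * B[k * N + j];
            C[i * N + j] = sum;
        }
}

/* Blocked version: work on BLOCK x BLOCK tiles so the working set
   stays in cache while it is being reused. */
static void dgemm_blocked(const double *A, const double *B, double *C)
{
    for (int ii = 0; ii < N; ii += BLOCK)
        for (int jj = 0; jj < N; jj += BLOCK)
            for (int kk = 0; kk < N; kk += BLOCK)
                for (int i = ii; i < ii + BLOCK; i++)
                    for (int j = jj; j < jj + BLOCK; j++) {
                        double sum = C[i * N + j];
                        for (int k = kk; k < kk + BLOCK; k++)
                            sum += A[i * N + k] * B[k * N + j];
                        C[i * N + j] = sum;
                    }
}

int main(void)
{
    double *A = calloc(N * N, sizeof(double));
    double *B = calloc(N * N, sizeof(double));
    double *C = calloc(N * N, sizeof(double));
    if (!A || !B || !C) return 1;

    for (int i = 0; i < N * N; i++) { A[i] = 1.0; B[i] = 2.0; }

    dgemm_blocked(A, B, C);   /* swap in dgemm_naive to compare timings */
    printf("C[0][0] = %.1f (expected %.1f)\n", C[0], (double)N * 1.0 * 2.0);

    free(A); free(B); free(C);
    return 0;
}

A common rule of thumb is to pick BLOCK so that a few BLOCK x BLOCK tiles of doubles fit in the cache being targeted; the tiles then stay resident while they are reused, which is the effect the preface says roughly doubles performance.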
The second goal was to help readers separate the forest from the trees by identifying eight great ideas of computer architecture early and then pointing out all the places they occur throughout the rest of the book. We use (hopefully) easy-to-remember margin icons and highlight the corresponding word in the text to remind readers of these eight themes. There are nearly 100 citations in the book. No chapter has fewer than seven examples of great ideas, and no idea is cited fewer than five times. Performance via parallelism, pipelining, and prediction are the three most popular great ideas, followed closely by Moore's Law. The processor chapter (4) is the one with the most examples, which is not a surprise since it probably received the most attention from computer architects. The one great idea found in every chapter is performance via parallelism, which is a pleasant observation given the recent emphasis on parallelism in the field and in editions of this book.
The third goal was to recognize the generation change in computing from the PC era to the PostPC era by this edition with our examples and material. Thus, Chapter 1 dives into the guts of a tablet computer rather than a PC, and Chapter 6 describes the computing infrastructure of the cloud. We also feature the ARM, which is the instruction set of choice in the personal mobile devices of the PostPC era, as well as the x86 instruction set that dominated the PC era and (so far) dominates cloud computing.
The fourth goal was to spread the I/O material throughout the book rather than have it in its own chapter, much as we spread parallelism throughout all the chapters in the fourth edition. Hence, I/O material in this edition can be found in Sections 1.4, 4.9, 5.2, 5.5, 5.11, and 6.9. The thought is that readers (and instructors) are more likely to cover I/O if it's not segregated to its own chapter.
This is a fast-moving field, and, as is always the case for our new editions, an important goal is to update the technical content. The running example is the ARM Cortex A8 and the Intel Core i7, reflecting our PostPC Era. Other highlights include an overview of the new 64-bit instruction set of ARMv8, a tutorial on GPUs that explains their unique terminology, more depth on the warehouse-scale computers that make up the cloud, and a deep dive into 10 Gigabit Ethernet cards.
To keep the main book short and compatible with electronic books, we placed the optional material as online appendices instead of on a companion CD as in prior editions.
Finally, we updated all the exercises in this book.
While some elements changed, we have preserved useful elements from prior editions. To make the book better as a reference, we still place definitions of new terms in the margins at their first occurrence. The book element called "Understanding Program Performance" helps readers understand the performance of their programs and how to improve it, just as the "Hardware/Software Interface" book element helps readers understand the tradeoffs at this interface. "The Big Picture" section remains so that the reader sees the forest despite all the trees. "Check Yourself" sections help readers confirm their comprehension of the material on the first time through, with answers provided at the end of each chapter. This edition still includes the green MIPS reference card, which was inspired by the "Green Card" of the IBM System/360. This card has been updated and should be a handy reference when writing MIPS assembly programs.
Instructor Support
We have collected a great deal of material to help instructors teach courses using this book. Solutions to exercises, figures from the book, lecture slides, and other materials are available to adopters from the publisher. Check the publisher's Web site for more information:
textbook.elsevier.com/9780124077263
Concluding Remarks
If you read the following acknowledgments section, you will see that we went to great lengths to correct mistakes. Since a book goes through many printings, we have the opportunity to make even more corrections. If you uncover any remaining, resilient bugs, please contact the publisher by electronic mail at cod5asiabugs@mkp.com or by low-tech mail using the address found on the copyright page.
This edition is the second break in the long-standing collaboration between Hennessy and Patterson, which started in 1989. The demands of running one of the world's great universities meant that President Hennessy could no longer make the substantial commitment to create a new edition. The remaining author felt once again like a tightrope walker without a safety net. Hence, the people in the acknowledgments and Berkeley colleagues played an even larger role in shaping the contents of this book. Nevertheless, this time around there is only one author to blame for the new material in what you are about to read.
Acknowledgments for the Fifth Edition
With every edition of this book, we are very fortunate to receive help from many readers, reviewers, and contributors. Each of these people has helped to make this book better.
Chapter 6 was so extensively revised that we did a separate review for ideas and contents, and I made changes based on the feedback from every reviewer. I'd like to thank Christos Kozyrakis of Stanford University for suggesting using the network interface for clusters to demonstrate the hardware-software interface of I/O and for suggestions on organizing the rest of the chapter; Mario Flagsilk of Stanford University for providing details, diagrams, and performance measurements of the NetFPGA NIC; and the following for suggestions on how to improve the chapter: David Kaeli of Northeastern University, Partha Ranganathan of HP Labs, David Wood of the University of Wisconsin, and my Berkeley colleagues Siamak Faridani, Shoaib Kamil, Yunsup Lee, Zhangxi Tan, and Andrew Waterman.
Special thanks goes to Rimas Avizienis of UC Berkeley, who developed the various versions of matrix multiply and supplied the performance numbers as well. As I worked with his father while I was a graduate student at UCLA, it was a nice symmetry to work with Rimas at UCB.
I also wish to thank my longtime collaborator Randy Katz of UC Berkeley, who helped develop the concept of great ideas in computer architecture as part of the extensive revision of an undergraduate class that we did together.
I'd like to thank David Kirk, John Nickolls, and their colleagues at NVIDIA (Michael Garland, John Montrym, Doug Voorhies, Lars Nyland, Erik Lindholm, Paulius Micikevicius, Massimiliano Fatica, Stuart Oberman, and Vasily Volkov) for writing the first in-depth appendix on GPUs. I'd like to express again my appreciation to Jim Larus, recently named Dean of the School of Computer and Communication Sciences at EPFL, for his willingness in contributing his expertise on assembly language programming, as well as for welcoming readers of this book with regard to using the simulator he developed and maintains.
I am also very grateful to Jason Bakos of the University of South Carolina, who updated and created new exercises for this edition, working from originals prepared for the fourth edition by Perry Alexander (The University of Kansas); Javier Bruguera (Universidade de Santiago de Compostela); Matthew Farrens (University of California, Davis); David Kaeli (Northeastern University); Nicole Kaiyan (University of Adelaide); John Oliver (Cal Poly, San Luis Obispo); Milos Prvulovic (Georgia Tech); and Jichuan Chang, Jacob Leverich, Kevin Lim, and Partha Ranganathan (all from Hewlett-Packard).
Additional thanks goes to Jason Bakos for developing the new slides.
I am grateful to the many instructors who have answered publisher surveys, reviewed our proposals, and attended focus groups to analyze and respond to our plans for this edition. They include the following individuals: Focus Groups in 2012: Bruce Barton (Suffolk County Community College), Jeff Braun (Montana Tech), Ed Gehringer (North Carolina State), Michael Goldweber (Xavier University), Ed Harcourt (St. Lawrence University), Mark Hill (University of Wisconsin, Madison), Patrick Homer (University of Arizona), Norm Jouppi (HP Labs), Dave Zachary Kurmas (Grand Valley State University), Jae C. Oh (Syracuse University), Lu Peng (LSU), Milos Prvulovic (Georgia Tech), Partha Ranganathan (HP Labs), David Wood (University of Wisconsin), Craig Zilles (University of Illinois at Urbana-Champaign). Surveys and Reviews: Mahmoud Abou-Nasr (Wayne State University), Perry Alexander (The University of Kansas), Hakan Aydin (George Mason University), Hussein Badr (State University of New York at Stony Brook), Mac Baker (Virginia Military Institute), Ron Barnes (George Mason University), Douglas Blough (Georgia Institute of Technology), Kevin Bolding (Seattle Pacific University), Miodrag Bolic (University of Ottawa), John Bonomo (Westminster College), Jeff Braun (Montana Tech), Tom Briggs (Shippensburg University), Scott Burgess (Humboldt State University), Fazli Can (Bilkent University), Warren R. Carithers (Rochester Institute of Technology), Bruce Carlton (Mesa Community College), Nicholas Carter (University of Illinois at Urbana-Champaign), Anthony Cocchi (The City University of New York), Don Cooley (Utah State University), Robert D. Cupper (Allegheny College), Edward W. Davis (North Carolina State University), Nathaniel J. Davis (Air Force Institute of Technology), Molisa Derk (Oklahoma City University), Derek Eager (University of Saskatchewan), Ernest Ferguson (Northwest Missouri State University), Rhonda Kay Gaede (The University of Alabama), Etienne M. Gagnon (UQAM), Costa Gerousis (Christopher Newport University), Paul Gillard (Memorial University of Newfoundland), Michael Goldweber (Xavier University), Georgia Grant (College of San Mateo), Merrill Hall (The Master's College), Tyson Hall (Southern Adventist University), Ed Harcourt (St. Lawrence University), Justin E. Harlow (University of South Florida), Paul F. Hemler (Hampden-Sydney College), Steve J. Hodges (Cabrillo College), Kenneth Hopkinson (Cornell University), Dalton Hunkins (St. Bonaventure University), Baback Izadi (State University of New York--New Paltz), Reza Jafari, Robert W. Johnson (Colorado Technical University), Bharat Joshi (University of North Carolina, Charlotte), Nagarajan Kandasamy (Drexel University), Rajiv Kapadia, Ryan Kastner (University of California, Santa Barbara), E. J. Kim (Texas A&M University), Jihong Kim (Seoul National University), Jim Kirk (Union University), Geoffrey S. Knauth (Lycoming College), Manish M. Kochal (Wayne State), Suzan Koknar-Tezel (Saint Joseph's University), Angkul Kongmunvattana (Columbus State University), April Kontostathis (Ursinus College), Christos Kozyrakis (Stanford University), Danny Krizanc (Wesleyan University), Ashok Kumar, S. Kumar (The University of Texas), Zachary Kurmas (Grand Valley State University), Robert N. Lea (University of Houston), Baoxin Li (Arizona State University), Li Liao (University of Delaware), Gary Livingston (University of Massachusetts), Michael Lyle, Douglas W. Lynn (Oregon Institute of Technology), Yashwant K. Malaiya (Colorado State University), Bill Mark (University of Texas at Austin), Ananda Mondal (Claflin University), Alvin Moser Neebel (Loras College), John Nestor (Lafayette College), Jae C. Oh (Syracuse University), Joe Oldham (Centre College), Timour Paltashev, James Parkerson (University of Arkansas), Shaunak Pawagi (SUNY at Stony Brook), Steve Pearce, Ted Pedersen (University of Minnesota), Lu Peng (Louisiana State University), Gregory D. Peterson (The University of Tennessee), Milos Prvulovic (Georgia Tech), Partha Ranganathan (HP Labs), Dejan Raskovic (University of Alaska, Fairbanks), Brad Richards (University of Puget Sound), Roman Rozanov, Louis Rubinfield (Villanova University), Md Abdus Salam (Southern University), Augustine Samba (Kent State University), Robert Schaefer (Daniel Webster College), Carolyn J. C. Schauble (Colorado State University), Keith Schubert (CSU San Bernardino), William L. Schultz, Kelly Shaw (University of Richmond), Shahram Shirani (McMaster University), Scott Sigman (Drury University), Bruce Smith, David Smith, Jeff W. Smith (University of Georgia, Athens), Mark Smotherman (Clemson University), Philip Snyder (Johns Hopkins University), Alex Sprintson (Texas A&M), Timothy D. Stanley (Brigham Young University), Dean Stevens (Morningside College), Nozar Tabrizi (Kettering University), Yuval Tamir (UCLA), Alexander Taubin (Boston University), Will Thacker (Winthrop University), Mithuna Thottethodi (UC San Diego), Rama Viswanathan (Beloit College), Ken Vollmar (Missouri State University), Guoping Wang (Indiana-Purdue University), Patricia Wenner (Bucknell University), Kent Wilken (University of California, Davis), David Wolfe (Gustavus Adolphus College), David Wood (University of Wisconsin, Madison), Ki Hwan Yum (University of Texas, San Antonio), Mohamed Zahran (City College of New York), Gerald D. Zarnett (Ryerson University), Nian Zhang (South Dakota School of Mines & Technology), Jiling Zhong (Troy University), Huiyang Zhou (The University of Central Florida), Weiyu Zhu (Illinois Wesleyan University).
A special thanks also goes to Mark Smotherman for making multiple passes to find technical and writing glitches that significantly improved the quality of this edition.
We wish to thank the extended Morgan Kaufmann family for agreeing to publish this book again under the able leadership of Todd Green and Nate McFadden: I certainly couldn't have completed the book without them. We also want to extend thanks to Lisa Jones, who managed the book production process, and Russell Purdy, who did the cover design. The new cover cleverly connects the PostPC Era content of this edition to the cover of the first edition.
The contributions of the nearly 150 people we mentioned here have helped make this fifth edition what I hope will be our best book yet. Enjoy!
David A. Patterson.
About the Authors
David A. Patterson has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1977, where he holds the Pardee Chair of Computer Science. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM and CRA.
At Berkeley, Patterson led the design and implementation of RISC I, likely the first VLSI reduced instruction set computer, and the foundation of the commercial SPARC architecture. He was a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led to dependable storage systems from many companies. He was also involved in the Network of Workstations (NOW) project, which led to cluster technology used by Internet companies and later to cloud computing. These projects earned three dissertation awards from ACM. His current research projects are Algorithms, Machines, and People (AMP) and Algorithms and Specializers for Provably Optimal Implementations with Resilience and Efficiency (ASPIRE). The AMP Lab is developing scalable machine learning algorithms, warehouse-scale-computer-friendly programming models, and crowd-sourcing tools to gain valuable insights quickly from big data in the cloud. The ASPIRE Lab uses deep hardware and software co-tuning to achieve the highest possible performance and energy efficiency for mobile and rack computing systems.
John L. Hennessy is the tenth president of Stanford University, where he has been a member of the faculty since 1977 in the departments of electrical engineering and computer science. Hennessy is a Fellow of the IEEE and ACM, a member of the National Academy of Engineering, and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.
In 1981, he started the MIPS project at Stanford with a handful of graduate students. After completing the project in 1984, he took a leave from the university to cofound MIPS Computer Systems (now MIPS Technologies), which developed one of the first commercial RISC microprocessors. As of 2006, over 2 billion MIPS microprocessors have been shipped in devices ranging from video games and palmtop computers to laser printers and network switches. Hennessy subsequently led the DASH (Directory Architecture for Shared Memory) project, which prototyped the first scalable cache coherent multiprocessor; many of the key ideas have been adopted in modern multiprocessors. In addition to his technical activities and university responsibilities, he has continued to work with numerous start-ups, both as an early-stage advisor and an investor.