IT Architecture and Security

Presentation transcript:

1 IT Architecture and Security
Fundamentals: Computers

2 Last week Last week we covered
An introduction to basic networking: networks, the OSI model, TCP/IP, routing

3 Agenda This week we cover the OS, file systems, RAID, SAN and NAS

4 Next week Next week we cover IT architecture
Single/multi-tier computing, server-based computing, client/server computing, mainframes, Citrix, enterprise network design

5 Change to the lectures
Lecture 6 will be changed. This means that the practice exam will be placed after Lecture 5, and that Encryption and Enterprise security models will be covered in Lecture 7 together with the other material for Lecture 7. On Thursday 15 March 2007 you have the opportunity to take part in an exciting full-day event together with students from the University of Virginia in the USA. The event is free! Beforehand you will be given a case about an American and a Scandinavian company that are to merge and, as part of that, choose a shared ERP system. You will be placed in groups of about 3 people. Each group represents the IT management of a Scandinavian company. The Americans are divided into corresponding groups. Each American group then forms a team with a Scandinavian group. They discuss problems and needs, and propose a plan for how to choose a shared system and get it up and running. The case contains plenty of potential conflict because the needs and cultures differ. 9:00-10:20 Talk on global sourcing by two of their staff, Ryan Nelson and Mike Morris. 10:30-12:00 Talk on global sourcing at Mærsk by Michael Laursen. 12:00-13:00 Lunch, with each team eating together. 13:00-17:00 Group work and plenary discussions in two rounds, with an opportunity to compare your own plan with what happened in the real case. 17:00-18:00 Drinks. 18:00-20:00 Dinner.

6 What is an operating system (OS)?
A program that is started by the boot process. A program that is accessed via an application program interface (API) or a user interface (GUI). It manages use of the CPU, including multitasking of applications. It manages use of the system's internal memory. It manages input to and output from attached hardware such as disks, printers, etc. It sends messages to applications and users about the status of operations being performed and any errors that occur.
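The API point above can be made concrete. A minimal Python sketch (the file name and contents are invented for illustration): a user program never touches the disk hardware itself; it asks the OS through calls such as os.open and os.write, and the OS performs the actual I/O on its behalf.

```python
import os
import tempfile

# Ask the OS which process we are: a typical API call into the kernel.
pid = os.getpid()

# Create and write a file through the OS API. The program never drives
# the disk controller itself; the OS does that behind these calls.
path = os.path.join(tempfile.gettempdir(), "os_api_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello via the OS API")
os.close(fd)

# Higher-level wrappers like open() sit on top of the same OS calls.
with open(path, "rb") as f:
    print(f.read())  # b'hello via the OS API'

os.remove(path)
```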

7 The OS is layered

8 Where is the OS used? In more and more places... On desktops and servers
MAC OSX Server; Windows NT, 2000, XP, 2003 and VISTA; BSD; Linux variants, commercial as well as open source: Novell/SuSE (OpenSuSE), RedHat (Fedora), Debian, Ubuntu, Gentoo; commercial UNIX variants: Solaris (BSD), AIX (AT&T), HPUX (AT&T); others: OpenVMS, OS/400, and more. On network equipment: routers

9 Where is the OS used? PDAs, mobile phones, game consoles, and more
PDAs: PalmOS, Windows Mobile, embedded Linux. Mobile phones: Symbian OS. Game consoles: Xbox, Xbox 360, PSP, PS2, PS3. Other: cars, audio and video equipment

10 UNIX Ken Thompson starts working on UNIX in 1969
Bill Joy starts working on BSD in 1976. Avie Tevanian starts working on MACH in 1985. Steve Jobs starts working on NextStep in 1985. Richard Stallman starts working on GNU in 1984. Linus Torvalds starts working on Linux in 1991.

11 The boot process Example: PC
When the machine is started, the basic input-output system (BIOS) stored in the system's read-only memory (ROM) is initiated. The BIOS first performs a POST check to make sure the system's components are present and working. The BIOS is configured to know where to look for the OS; normally it checks the disk first, then the CD-ROM, and the order can be changed. Once the BIOS has determined where the OS is, it loads the first sector (512 bytes) of the disk, which holds the Master Boot Record (MBR). The MBR starts the OS setup and loads the kernel of the OS into the system's memory.
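The MBR step can be sketched in a few lines. This is a simplified model, not real firmware: it builds a fake in-memory disk image and checks the 2-byte boot signature (0x55 0xAA) that PC BIOSes require at the end of the 512-byte boot sector.

```python
# Toy model of BIOS-style MBR validation. The MBR occupies the first
# 512-byte sector; bytes 510-511 must hold the boot signature 0x55 0xAA
# before the BIOS will hand control to the code in that sector.
SECTOR_SIZE = 512
BOOT_SIGNATURE = b"\x55\xaa"

def read_mbr(disk_image: bytes) -> bytes:
    """Return the first sector (the MBR) of a raw disk image."""
    if len(disk_image) < SECTOR_SIZE:
        raise ValueError("disk image smaller than one sector")
    return disk_image[:SECTOR_SIZE]

def is_bootable(mbr: bytes) -> bool:
    """Check the 2-byte boot signature at offset 510."""
    return mbr[510:512] == BOOT_SIGNATURE

# Build a fake, otherwise empty disk image with a valid signature.
fake_disk = bytearray(SECTOR_SIZE * 4)
fake_disk[510:512] = BOOT_SIGNATURE

mbr = read_mbr(bytes(fake_disk))
print(is_bootable(mbr))  # True
```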

12 The kernel The kernel is the innermost, fundamental part of the OS; it is started by the boot process and loaded into main memory. The kernel is ALWAYS in main memory. What the kernel contains varies from OS to OS.

13 The kernel ("XNU") In our MAC OSX example the kernel is built from BSD and MACH. Mach provides multitasking, memory management, interrupt handling, real-time support, kernel debugging and the I/O Kit (object-oriented). BSD provides process control, basic security (user access), the POSIX API, TCP/IP, BSD sockets, the firewall, the VFS file system and encryption.

14 The rest of the OS Darwin (open source) consists of XNU plus system utilities. The rest of the OS:
C compiler, application libraries, graphical libraries (Carbon, QuickTime, Quartz, etc.), Java Runtime, Java Virtual Machine, GUI

15 Kernel types Monolithic kernels
The entire kernel runs in memory and exposes all system calls for services such as networking, process management, memory management, etc. In theory this means that all kernel functionality is initialized at system start. Modern monolithic kernels, however, support loadable modules that can be pulled into the kernel dynamically. Examples: DOS, Linux, BSD, Solaris and others. A monolithic kernel is a kernel architecture where the entire kernel is run in kernel space in supervisor mode. In common with other architectures (microkernel, hybrid kernels), the kernel defines a high-level virtual interface over computer hardware, with a set of primitives or system calls to implement operating system services such as process management, concurrency, and memory management in one or more modules. Even if every module servicing these operations is separate from the whole, the code integration is very tight and difficult to do correctly, and, since all the modules run in the same address space, a bug in one module can bring down the whole system. However, when the implementation is complete and trustworthy, the tight internal integration of components allows the low-level features of the underlying system to be effectively utilized, making a good monolithic kernel highly efficient. In a monolithic kernel, all the systems, such as filesystem management, run in an area called kernel mode.

16 Kernel types Microkernels
A minimal kernel runs in memory and exposes only the most basic system calls for services such as process management, memory management, etc. Other services that would otherwise be expected in the kernel are provided by programs outside the kernel, called servers. Microkernels have attracted renewed interest in recent years because of security. Examples: AmigaOS, Symbian OS and others. In 2006 the debate about the potential security benefits of the microkernel design increased[3]. Many attacks on computer systems take advantage of bugs in various pieces of software. For instance, one of the common attacks is the buffer overflow, in which malicious code is "injected" by asking a program to process some data, and then feeding in more data than it stated it would send. If the receiving program does not specifically check the amount of data it received, it is possible that the extra data will be blindly copied into the receiver's memory. This code can then be run under the permissions of the receiver. This sort of bug has been exploited repeatedly, including a number of recent attacks through web browsers. To see how a microkernel can help address this, first consider the problem of having a buffer overflow bug in a device driver. Device drivers are notoriously buggy[4], but nevertheless run inside the kernel of a traditional operating system, and therefore have "superuser" access to the entire system[5]. Malicious code exploiting this bug can thus take over the entire system, with no boundaries to its access to resources[6]. For instance, under open-source monolithic kernels such as Linux or the BSDs a successful attack on the networking stack over the internet could proceed to install a backdoor that runs a service with arbitrarily high privileges, so that the intruder may abuse the infected machine in any way[7] and no security check would be applied because the rootkit is acting from inside the kernel.
Even if appropriate steps are taken to prevent this particular attack[8], the malicious code could simply copy data directly into other parts of the kernel memory, as it is shared among all the modules in the kernel. A microkernel system is somewhat more resistant to these sorts of attacks[9] for two reasons. For one, an identical bug in a server would allow the attacker to take over only that program, not the entire system; in other words, microkernel designs obey the principle of least authority. This isolation of "powerful" code into separate servers helps isolate potential intrusions, notably as it allows a CPU's memory management unit to check for any attempt to copy data between the servers.

17 Kernel types Hybrid kernels
These combine elements of monolithic kernels and microkernels. The idea is a kernel similar to a microkernel, but implemented as a monolithic kernel: all servers run inside the kernel. Examples: MAC OSX; Windows NT, 2000, 2003, XP & VISTA

18 OS bloat Over time there has been a tendency, starting with BSD, to include more and more services in the OS: OS bloat. Heated debates rage about which kernel types are best. The fiercest and longest-running debate is between Andrew S. Tanenbaum and Linus Torvalds. Google: The Tanenbaum-Torvalds Debate.
Op. Sys. / SLOC:
Windows NT: 16 million
Red Hat Linux 7.1: 30 million
Windows 2000: 29 million
Debian 3.1: 213 million
Windows XP: 40 million
Sun Solaris: 7.5 million
Windows VISTA: 50 million
MAC OS X 10.4: 86 million
Linux kernel 2.6: 6 million
The Tanenbaum-Torvalds debate is a debate between Andrew S. Tanenbaum and Linus Torvalds regarding Linux and kernel architecture in general. Tanenbaum began the debate in 1992 on the Usenet discussion group comp.os.minix,[1] arguing that microkernels are superior to monolithic kernels and that, for this reason, Linux is obsolete. The debate was not restricted to just Tanenbaum and Torvalds, as it was on a Usenet group; other notable hackers such as Ken Thompson (one of the founders of Unix) and David Miller joined in as well. Due to the strong tone used in the newsgroup posts, the debate has widely been recognized as a "flame war", a deliberately hostile exchange of messages, between the two camps (of Linux and MINIX, or alternatively, of monolithic kernel enthusiasts and microkernel enthusiasts) and has been described as such in various publications.[2] Torvalds himself also acknowledged this in his first newsgroup post about the issue, stating (verbatim) “I'd like to be able to just 'ignore the bait', but ... 
Time for some serious flamefesting!”[3] This subject was revisited in 2006, again with Tanenbaum as initiator, after he had written a cover story for Computer magazine titled “Can We Make Operating Systems Reliable and Secure?”[4] While Tanenbaum himself has mentioned that he did not write the article for the purpose of entering a debate on kernel design again,[5] the juxtaposition of the article and an archived version of the 1992 debate on the technology site Slashdot caused the subject to be rekindled.[6] After Torvalds posted a rebuttal of Tanenbaum's arguments via an online discussion forum,[7] several technology news sites began reporting the issue.[8]

19 Modes and processes Modes
To perform special functions, programs in user mode make system calls to the kernel in kernel/supervisor mode, where trusted code performs the functions. Processes: a process is an instance of a running program. A process consists of five parts: a copy of the program's code; memory (real memory or virtual memory) containing the code and process-specific data; OS resources (descriptors) allocated to the process; security attributes, such as the process owner and the process's permissions; and the process's context. In computer terms, supervisor mode (sometimes called kernel mode) is a hardware-mediated flag which can be changed by code running in system-level software. System-level tasks or threads will have this flag set while they are running, whereas user-space applications will not. This flag determines whether it would be possible to execute machine code operations such as modifying registers for various descriptor tables, or performing operations such as disabling interrupts. The idea of having two different modes to operate in comes from "with more control comes more responsibility": a program in supervisor mode is trusted never to fail, because if it does, the whole computer system may crash. In general, a computer system process consists of (or is said to 'own') the following resources: An image of the executable computer code associated with a program. Memory (typically some region of virtual memory and/or real memory), which contains the executable code and process-specific data, including initial, intermediary, and final products. Operating system descriptors of resources that are allocated to the process, such as file descriptors (Unix terminology) or handles (Windows). Security attributes, such as the process owner and the process' set of permissions. Processor state (context), such as the content of registers, physical memory addressing, etc. The state is typically stored in computer registers when the process is executing, and in memory otherwise. 
Any subset of resources, but typically at least the processor state, may be associated with each of the process' threads in operating systems that support threads or 'daughter' processes.

20 Multitasking For several processes to run at the same time and share the same resources, such as the CPU, multitasking is needed. The CPU can give its attention to only one process at a time, meaning that the CPU is actively executing instructions for that process. With multitasking, it is scheduled which process gets attention when, and when the next process gets its turn. A context switch is when the CPU shifts its attention from one process to another. If context switching happens quickly enough, it appears as though the processes run in parallel. Even on computers with several CPUs (multiprocessor machines), multitasking helps run more processes than there are CPUs. In computing, multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves the problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch. When context switches occur frequently enough the illusion of parallelism is achieved. Even on computers with more than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there are CPUs. Operating systems may adopt one of many different scheduling strategies, which generally fall into the following categories: * In multiprogramming systems, the running task keeps running until it performs an operation that requires waiting for an external event (e.g. reading from a tape) or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage. 
* In time-sharing systems, the running task is required to relinquish the CPU, either voluntarily or by an external event such as a hardware interrupt. Time sharing systems are designed to allow several programs to execute apparently simultaneously. * In real-time systems, some waiting tasks are guaranteed to be given the CPU when an external event occurs. Real time systems are designed to control mechanical devices such as industrial robots, which require timely processing. The term time-sharing is no longer commonly used, having been replaced by simply multitasking.
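The scheduling idea above can be sketched as a toy round-robin scheduler. The process names and burst times are invented, and a real scheduler is far more elaborate; the point is only that each context switch hands the CPU to the next runnable process for one time slice.

```python
from collections import deque

# Toy round-robin scheduler: each "context switch" gives the CPU to the
# next runnable process for one fixed time slice, which is how a single
# CPU creates the appearance of processes running in parallel.
def round_robin(bursts, slice_len=2):
    """bursts maps process name -> remaining CPU time needed.
    Returns the order in which processes received the CPU."""
    ready = deque(bursts.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)                # context switch: 'name' runs
        remaining -= slice_len
        if remaining > 0:
            ready.append((name, remaining))  # preempted, back of the queue
    return schedule

print(round_robin({"A": 4, "B": 2, "C": 5}))
# ['A', 'B', 'C', 'A', 'C', 'C']
```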

21 Multithreading Multitasking lets programmers develop programs that run as several cooperating processes (for example, one to gather data, one to process the data, one to write the results to disk). This requires several streams of execution to be active within a process at the same time. A thread holds the information associated with one such stream of execution inside a process; a process can therefore contain several threads, which is called multithreading. As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g. one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data. Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are basically processes that run in the same memory context. Threads are described as lightweight because switching between threads does not involve changing the memory context. Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time and to even manage multiple requests by the same user without having to have multiple copies of the program running in the computer. Each user request for a program or system service (and here a user can also be another program) is kept track of as a thread with a separate identity. As programs work on behalf of the initial request for that thread and are interrupted by other requests, the status of work on behalf of that thread is kept track of until the work is completed.
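The cooperating-threads pattern described above (one gathers, one processes, one writes) can be sketched with Python's standard threading and queue modules. The data is invented and the "disk write" is simulated by appending to a list; all three threads share the process's memory, which is exactly the point of threads.

```python
import queue
import threading

# Three cooperating threads in one process: a gatherer, a processor and
# a writer, connected by queues that live in the shared memory space.
raw = queue.Queue()
processed = queue.Queue()
results = []

def gather():
    for i in range(5):
        raw.put(i)            # pretend this is incoming input data
    raw.put(None)             # sentinel: no more data

def process():
    while (item := raw.get()) is not None:
        processed.put(item * item)
    processed.put(None)

def write():
    while (item := processed.get()) is not None:
        results.append(item)  # pretend this writes the result to disk

threads = [threading.Thread(target=f) for f in (gather, process, write)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 1, 4, 9, 16]
```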

22 Example

23 Memory management When several programs run at once, there is a risk that a badly written (or deliberately destructive) running program overwrites another running program's memory allocation. The OS therefore allocates memory to a running program and makes sure the program is not allowed to access memory outside its allocation. One way for an OS to increase the available memory is to use a swap file or swap partition (virtual memory). In NT-based versions of Windows (such as Windows 2000 and Windows XP), the swap file is named pagefile.sys. The default location of the page file is in the root directory of the partition where Windows is installed. Windows can be configured to use free space on any available drives for page files. Occasionally, when the page file is gradually expanded, it can become heavily fragmented and cause performance issues. The common advice given to avoid this problem is to set a single "locked" page file size so that Windows will not resize it. Other people believe this to be problematic in the case that a Windows application requests more memory than the total size of physical and virtual memory. In this case, memory is not successfully allocated and as a result, programs, including system processes, may crash. Supporters of this view will note that the page file is rarely read or written in sequential order, so the performance advantage of having a completely sequential page file is minimal. It is, however, generally agreed that a large page file will allow use of memory-heavy applications, and there is no penalty except that more disk space is used. In the Linux and *BSD operating systems, it is common to use a whole partition of a HDD for swapping. Though it is still possible to use a file for this, it is recommended to use a separate partition, because this excludes chances of file system fragmentation, which would reduce performance. 
However, with the 2.6 Linux kernel swap files are just as fast as swap partitions, so this recommendation does not apply much to current Linux systems, and the flexibility of swap files can outweigh that of partitions.
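Virtual memory can be illustrated with a toy page-replacement simulation. The sketch below uses FIFO eviction purely for simplicity (real OSes use more sophisticated policies), and the access pattern is invented: when RAM frames are full, the oldest page is "swapped out" and a later access to it causes a page fault.

```python
from collections import deque

# Toy virtual-memory model: physical frames are scarce, so pages not
# resident in RAM live in swap and fault back in on access. FIFO
# eviction is used here only to keep the sketch short.
def count_page_faults(accesses, num_frames):
    frames = deque()              # pages currently resident in RAM
    faults = 0
    for page in accesses:
        if page not in frames:
            faults += 1           # page fault: load the page from swap
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest page to swap
            frames.append(page)
    return faults

print(count_page_faults([1, 2, 3, 1, 4, 1, 2], num_frames=3))  # 6
```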

24 File systems The last big thing an OS helps with is a file system.
Hierarchical. WIN: FAT, FAT32, NTFS. MAC: HFS, HFS+, NTFS (ro), FAT32 (ro), ZFS (10.5). Linux/Unix: ext2, ext3, ReiserFS, Reiser4, UDF, UFS, UFS2, XFS, ZFS, FAT32, NTFS (ro). Distributed: AFS, NFS, SMB. Distributed (fault-tolerant, shared across several nodes): CODA, DFS. Record-oriented. Mainframe: VSAM, ISAM and others (a collection of records). Server Message Block: SMB works through a client-server approach, where a client makes specific requests and the server responds accordingly. One section of the SMB protocol is specifically for filesystem access, such that clients may make requests to a file server, but there are other sections of the SMB protocol that specialise in inter-process communication (IPC). The SMB protocol was optimized for local subnet usage, but one could use it to access different subnets across the Internet, on which MS Windows file-and-print sharing exploits usually focus. Coda is a distributed file system with its origin in AFS2. It has many features that are very desirable for network file systems. Currently, Coda has several features not found elsewhere. 1. disconnected operation for mobile computing 2. is freely available under a liberal license 3. high performance through client side persistent caching 4. server replication 5. security model for authentication, encryption and access control 6. continued operation during partial network failures in server network 7. network bandwidth adaptation 8. good scalability 9. well defined semantics of sharing, even in the presence of network failures

25 File systems Name, filename length, allowed characters, maximum path,
maximum file size, maximum volume size:
NTFS: 254 chars + ".", all Unicode except the NULL, /, and : characters, 16 exabytes
HFS Plus: 255 chars, all Unicode, no limits
Ext3: all Unicode except NULL, 2 terabytes, 32 terabytes
ZFS: enormous
(Exabyte = 10^18 bytes; terabyte = 10^12 bytes.)

26 Logical volumes A logical layer on top of physical disks Advantages
Combine several physical disks into logical disks. Resize logical disks "on the fly". Volume managers differ but some basic concepts exist across most versions. The volume manager starts with physical volumes (or PVs), which can be hard disk partitions, RAID devices or SAN LUNs. PVs are split into small chunks called physical extents (or PEs). Some volume managers (such as that in HP-UX and Linux) will have PEs of an even size; others (such as that in Veritas) will have variably-sized PEs that can be split and merged at will. The PEs are then pooled into a volume group or VG. The pooled PEs can then be concatenated together into virtual disk partitions called logical volumes or LVs. These LVs behave just like hard disk partitions: mountable file systems can be created on them, or they can be used as raw block devices for swap. The LVs can be grown by concatenating more PEs from the pool. Some volume managers allow LV shrinking; some allow online resizing in either direction. Changing the size of the LV does not necessarily change the size of a filesystem on it; it merely changes the size of its containing space. A file system that can be resized online is recommended because it allows the system to adjust its storage on-the-fly without interrupting applications. PVs may also be organized into physical volume groups or PVGs. This allows LVs to be mirrored by pairing together its PEs with redundant ones on a different PVG, so that the failure of one PVG will still leave at least one complete copy of the LV online. In practice, PVGs are usually chosen so that their PVs reside on different sets of disks and/or data buses for maximum redundancy.
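The PV/PE/VG/LV concepts above can be modeled in a few lines. This is a toy model: the device names are invented and the 4 MB extent size is an arbitrary choice for illustration; real volume managers track far more state.

```python
# Toy volume manager: physical volumes are split into fixed-size
# physical extents (PEs), pooled into a volume group, and concatenated
# into logical volumes that can be grown "on the fly".
PE_SIZE_MB = 4

class VolumeGroup:
    def __init__(self):
        self.free_extents = []      # pooled PEs from all PVs
        self.logical_volumes = {}   # LV name -> list of PEs

    def add_physical_volume(self, pv_name, size_mb):
        n = size_mb // PE_SIZE_MB   # split the PV into PEs
        self.free_extents += [(pv_name, i) for i in range(n)]

    def create_lv(self, name, size_mb):
        n = size_mb // PE_SIZE_MB
        if n > len(self.free_extents):
            raise ValueError("not enough free extents")
        self.logical_volumes[name] = [self.free_extents.pop() for _ in range(n)]

    def extend_lv(self, name, extra_mb):
        # Growing an LV just concatenates more PEs from the pool.
        n = extra_mb // PE_SIZE_MB
        self.logical_volumes[name] += [self.free_extents.pop() for _ in range(n)]

vg = VolumeGroup()
vg.add_physical_volume("sda1", 100)  # 25 PEs
vg.add_physical_volume("sdb1", 100)  # 25 PEs
vg.create_lv("data", 120)            # an LV spanning both physical disks
vg.extend_lv("data", 40)             # resized without touching the disks
print(len(vg.logical_volumes["data"]) * PE_SIZE_MB)  # 160
```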

27 Which OS should I choose?
It depends on the task and on competence. Each OS has different interfaces. Programs are written specifically for an OS; an application for one OS does not run on another. Trends: cross-over solutions such as WINE, VMWare, Parallels, CodeWeavers

28 The different OS types and their characteristics
Mainframe OS Mission critical. High-volume interfaces: batch, transaction processing, time-sharing. Java support. UNIX and Linux APIs. Simple GUI. SNA, TCP/IP. Examples: z/OS, z/VM

29 The different OS types and their characteristics
Server OS Runs on a server. What is a server? Focuses on sharing hardware and software resources. Services can, for example, be file, print or web services. Examples: Linux, MAC OS X Server, Windows 2000/2003, OpenVMS

30 The different OS types and their characteristics
Client OS Good GUI. Resource management and OS protection are often much weaker, since these are single-user systems. Examples: MAC OS X, Linux, Windows XP, Windows VISTA

31 The different OS types and their characteristics
Embedded OSes Typically used on devices such as mobile phones etc. The user typically has no access to the operating system. TVs, microwave ovens, mobile phones, PDAs. Typically have less memory, CPU, screen and so on. Examples: PalmOS, Windows Mobile.

32 Break

33 Hardware CPU Memory Video Controller Keyboard Controller Floppy disk

34 Hardware components Processors Memory I/O devices Buses

35 The processor (CPU) is the brain of the computer
It fetches instructions from memory and executes them. Each instruction passes through Fetch, Decode, Execute and Write Back. Fetch retrieves the instruction from memory at the location given by a program counter. Decode splits the instruction into parts for use by other parts of the processor. During Execute, those parts of the processor are activated and carry out their part of the instruction. During Write Back, the results of the execution are written back to memory. Each type of CPU has a specific instruction set; the Intel Core 2 Duo and SPARC, for example, each have their own. Consequence: a program written for the Intel Core 2 Duo cannot run on a SPARC. A CPU also contains registers, a special type of memory for holding intermediate results.
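The fetch-decode-execute-write-back cycle can be sketched as a toy CPU. The instruction set here is invented for illustration (it is not any real CPU's), with a single accumulator register and memory as a simple address-to-value map.

```python
# Toy CPU illustrating the fetch-decode-execute-write-back cycle.
# Invented instruction set: ("LOAD", addr), ("ADD", addr),
# ("STORE", addr), ("HALT",). One accumulator register holds
# intermediate results, as the registers on the slide do.
def run(program, memory):
    pc = 0   # program counter: drives the Fetch stage
    acc = 0  # accumulator register
    while True:
        instr = program[pc]       # Fetch the next instruction
        op, *args = instr         # Decode it into opcode and operands
        pc += 1
        if op == "LOAD":          # Execute...
            acc = memory[args[0]]
        elif op == "ADD":
            acc += memory[args[0]]
        elif op == "STORE":
            memory[args[0]] = acc  # ...and Write Back to memory
        elif op == "HALT":
            return memory

memory = {0: 7, 1: 35, 2: 0}
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT",)]
print(run(program, memory)[2])  # 42
```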

36 The processor (CPU) Most CPUs can run in two different modes:
User mode and kernel mode (does that sound familiar?). In kernel mode the CPU can execute every instruction in the instruction set and has access to the actual hardware. In user mode only a subset of the instructions is available, and less of the hardware.

37 Memory Memory is used to store instructions and data while a program executes. Memory is typically designed around three criteria: speed, price and capacity. No kind of memory is optimal in all respects; we therefore speak of a memory hierarchy inside a computer.

38 Memory

39 I/O devices As mentioned earlier, the OS also manages I/O devices
IMPORTANT: a typical user program can NOT access I/O devices directly. An I/O device typically consists of two parts: a device controller, which is a chip (or several), typically a small microcontroller that is independent of the CPU and programmed only to control the device; and the device itself. Examples: a graphics card and a monitor; a hard disk controller and the hard disk itself; a printer controller with its printer; ..... A device controller is often also called a card or an adapter (SCSI)

40 I/O devices The OS talks to the device controller, and the device controller then talks directly to the hardware. The part of the OS that talks to the device controller is called a device driver. A device controller typically has a different device driver for each operating system:
Hardware (device): Manufacturer A controller, Manufacturer B controller, Manufacturer C controller, ……
Software (OS and driver): Linux OS: Linux driver. Win XP OS: Windows XP driver. Solaris OS: Solaris driver. ………………
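The driver-per-OS matrix above can be sketched as a simple lookup: the same physical controller needs a different driver on each operating system, and the OS must find a matching driver before it can talk to the controller. All file names here are illustrative inventions.

```python
# Toy driver registry: one physical controller, one driver per OS.
# The driver file names are invented for illustration only.
DRIVERS = {
    ("Manufacturer A controller", "Linux"):      "manufacturer_a.ko",
    ("Manufacturer A controller", "Windows XP"): "mfga_xp.sys",
    ("Manufacturer A controller", "Solaris"):    "mfga_solaris.drv",
    ("Manufacturer B controller", "Linux"):      "manufacturer_b.ko",
}

def load_driver(controller, os_name):
    """Look up the driver this OS needs to talk to this controller."""
    try:
        return DRIVERS[(controller, os_name)]
    except KeyError:
        raise RuntimeError(f"no {os_name} driver for {controller}") from None

print(load_driver("Manufacturer A controller", "Linux"))  # manufacturer_a.ko
```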

41 I/O devices and drivers A device driver works closely with core functions of the operating system and therefore typically has to run in kernel mode. There are typically three ways to load a driver into an operating system: Relink the OS kernel with the driver and reboot the system; most Unix variants work this way. Make an entry in the operating system's configuration file telling it to load the driver at boot; typical for Windows. Load and accept the driver while the operating system is running; this allows hot-plugging (Plug and Pray?) and is called "dynamic loading", and no reboot is needed. Most OSes are moving in that direction. USB typically requires dynamic loading.

42 Buses All traffic between the CPU, memory and I/O devices runs over a shared bus
In the beginning there was a single bus; most computers today have many buses. For example, a PC can have up to 8 buses: local bus, cache bus, memory bus, PCI bus, SCSI, USB, IDE, ISA bus. A bus is a subsystem that transfers data or power between computer components inside a computer or between computers, and typically is controlled by device driver software. Unlike a point-to-point connection, a bus can logically connect several peripherals over the same set of wires. Each bus defines its set of connectors to physically plug devices, cards or cables together. Examples: USB, FireWire, SCSI. SCSI (Small Computer System Interface) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners, printers, and optical drives (CD, DVD, etc.). The SCSI standards promote device independence, which means that, at least in theory, almost any type of hardware can be connected via SCSI. In computer storage, a logical unit number or LUN is an address for an individual disk drive and, by extension, the disk device itself. The term is used in the SCSI protocol as a way to differentiate individual disk drives within a common SCSI target device like a disk array.

43 Virtual machines A virtual-machine OS kernel offers virtual machines to the layer above. A virtual machine is a copy of the hardware: it includes kernel- and user-mode hardware emulation, and it has I/O, interrupts and everything a real machine has. Typically a virtual machine can run any OS on top of itself.

44 Example of a virtual machine

45 Example of a virtual infrastructure

46 Example

47 Data protection - RAID
RAID seeks to protect against data loss due to disk failure. Depending on the RAID level, RAID can recover the data from a failed disk. Sometimes RAID includes hot-swap; other times it is software-based. In computing, the acronym RAID (originally redundant array of inexpensive disks, also known as redundant array of independent disks) refers to a data storage scheme using multiple hard drives to share or replicate data among the drives. The distribution of data across multiple disks can be managed by either dedicated hardware or by software. Additionally, there are hybrid RAIDs that are partially software AND hardware-based solutions. A hardware implementation of RAID requires at a minimum a special-purpose RAID controller. On a desktop system, this may be a PCI expansion card, or might be a capability built in to the motherboard. In larger RAIDs, the controller and disks are usually housed in an external multi-bay enclosure. The disks may be IDE/ATA, SATA, SCSI, Fibre Channel, or any combination thereof. The controller links to the host computer(s) with one or more high-speed SCSI, PCIe, Fibre Channel or iSCSI connections, either directly, or through a fabric, or is accessed as network-attached storage. This controller handles the management of the disks, and performs parity calculations (needed for many RAID levels). This option tends to provide better performance, and makes operating system support easier. Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while the system is running. In rare cases hardware controllers have become faulty, which can result in data loss. The term LUN has become common in storage area networks (SAN) and other enterprise storage fields. Today, LUNs are normally not entire disk drives but rather virtual partitions (or volumes) of a RAID set. HIGHLIGHT ZFS (POSSIBLE DEMO)

48 RAID levels
RAID level and description: 0 Striping. 1 Mirroring. 2 Hamming code parity. 3 Byte-level parity. 4 Block-level parity. 5 Interleaved parity. 6 Double parity (an extension of 5). 10 (0 + 1) Striping & mirroring. The term striping refers to the segmentation of logically sequential data, such as a single file, so that segments can be written to multiple physical devices (usually disk drives) in a round-robin fashion. In data storage, disk mirroring (which is different from file shadowing) is the replication of logical disk volumes onto separate logical disk volumes in real time to ensure continuous availability, currency and accuracy. A mirrored volume is a complete and separate copy of a logical disk volume. Parity refers to a technique of checking whether data has been lost or written over when it is moved from one place in storage to another or when transmitted between computers. Here is how it works: an additional binary digit, the parity bit, is added to a group of bits that are moved together. This bit is used only for the purpose of identifying whether the bits being moved arrived successfully. Before the bits are sent, they are counted, and if the total number of data bits is even, the parity bit is set to one so that the total number of bits transmitted will form an odd number. If the total number of data bits is already an odd number, the parity bit remains or is set to 0. At the receiving end, each group of incoming bits is checked to see if the group totals to an odd number. If the total is even, a transmission error has occurred and either the transmission is retried or the system halts and an error message is sent to the user. RAID-3: This type uses striping and dedicates one drive to storing parity information. The embedded error checking (ECC) information is used to detect errors. RAID-4: This type uses large stripes, which means you can read records from any single drive. This allows you to take advantage of overlapped I/O for read operations. 
Since all write operations have to update the parity drive, no I/O overlapping is possible. RAID-5: This type includes a rotating parity array, addressing the write limitation of RAID-4, so all read and write operations can be overlapped. RAID-5 stores parity information but not redundant data (though the parity information can be used to reconstruct data). RAID-5 requires at least three and usually five disks for the array. RAID-6: This type is similar to RAID-5 but includes a second parity scheme distributed across different drives, and thus offers extremely high fault- and drive-failure tolerance. RAID-7: This type includes a real-time embedded operating system as a controller, caching via a high-speed bus, and other characteristics of a stand-alone computer. One vendor offers this system. RAID-10: Combining RAID-0 and RAID-1 is often referred to as RAID-10, which offers higher performance than RAID-1 but at much higher cost. There are two subtypes: in RAID-0+1, data is organized as stripes across multiple disks, and the striped disk sets are then mirrored; in RAID-1+0, the data is mirrored and the mirrors are striped.
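The parity idea described above can be sketched in a few lines: RAID-4/5 compute the parity block as the XOR of the data blocks in a stripe, which lets any single failed disk be rebuilt from the survivors. The disk contents below are made-up illustrative values, not data from any real array.

```python
from functools import reduce

# Hypothetical contents of three data disks in one 4-byte stripe
# (illustrative values only).
data_disks = [b"\x0a\x0b\x0c\x0d", b"\x10\x20\x30\x40", b"\xff\x00\xff\x00"]

def xor_blocks(blocks):
    """XOR corresponding bytes of several equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# The parity block is the XOR of all data blocks in the stripe.
parity = xor_blocks(data_disks)

# If disk 1 fails, its contents are rebuilt from the survivors plus parity,
# because XOR-ing the same value twice cancels it out.
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
```

This also shows why every write in RAID-4 touches the dedicated parity drive: changing any data block changes the XOR, so the parity block must be rewritten too.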

49 Storage Area Network (SAN)
What is a SAN solution? Typically a high-speed network, with both LAN and bus characteristics, that establishes a connection between file systems (servers) and storage elements. Think of it as a giant bus assembled from the same kinds of technologies used on LANs and WANs: repeaters, hubs, bridges, switches, converters and extenders. SAN interfaces are typically Fibre Channel… and not Ethernet or ATM.

50 Why SAN
Reduction of TCO
Better resource management
Scalable storage
Easy to use, appears as just another physical disk
Can be used over both Fibre Channel and IP

51 NAS
NAS typically uses existing IP networks and functions as an appliance. AFS, NFS or SMB support. iSCSI is a widely used buzzword: SCSI over the Internet. The iSCSI (pronounced eye-skuzzy) protocol uses TCP/IP for its data transfer. Unlike other network storage protocols, such as Fibre Channel (the foundation of most SANs), it requires only the simple and ubiquitous Ethernet interface (or any other TCP/IP-capable network) to operate. This enables low-cost centralization of storage without the expense and incompatibility normally associated with Fibre Channel storage area networks. Critics of iSCSI expect worse performance than Fibre Channel because of the overhead the TCP/IP protocol adds to the communication between client and storage. However, techniques like TCP Offload Engine (TOE) help reduce this overhead. Tests have shown excellent performance of iSCSI SANs, whether TOEs or plain Gigabit Ethernet NICs were used. In fact, in modern high-performance servers, a plain NIC with efficient network driver code can outperform a TOE card because fewer interrupts and DMA memory transfers are required. Initial iSCSI solutions are based on a software stack. The iSCSI market is growing steadily and should improve in performance and usability as more organizations deploy Gigabit and 10 Gigabit networks, and as manufacturers integrate iSCSI support into their operating systems, SAN products and storage subsystems. iSCSI becomes even more interesting as Ethernet starts to support higher speeds than Fibre Channel.

52 High-Availability (HA) Clusters
Loosely coupled computers that work together and outwardly act as a single computer. High-availability (HA) clusters: High-availability clusters (a.k.a. failover clusters) are implemented primarily to improve the availability of the services the cluster provides. They operate by having redundant nodes, which are used to provide service when system components fail. The most common size for an HA cluster is two nodes, the minimum required to provide redundancy. HA cluster implementations attempt to manage the redundancy inherent in a cluster to eliminate single points of failure. High-performance (HPC) clusters: High-performance clusters are implemented primarily to provide increased performance by splitting a computational task across many different nodes in the cluster, and are most commonly used in scientific computing. One of the most popular HPC implementations is a cluster with nodes running Linux as the OS and free software to implement the parallelism. This configuration is often referred to as a Beowulf cluster. Grid computing: Grid computing, or grid clusters, is a technology closely related to cluster computing. The key differences between grids and traditional clusters are that grids connect collections of computers which do not fully trust each other, and hence operate more like a computing utility than like a single computer. In addition, grids typically support more heterogeneous collections than are commonly supported in clusters. Grid computing is optimized for workloads consisting of many independent jobs or packets of work that do not have to share data between jobs during the computation process. Grids manage the allocation of jobs to computers which perform the work independently of the rest of the grid cluster.
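The failover behaviour of a two-node HA cluster can be sketched as a simple decision: serve from the preferred primary while it is healthy, otherwise fall back to the redundant standby. The node names and health table below are hypothetical; real clusters use heartbeats, fencing and quorum rather than a dictionary lookup.

```python
# Minimal sketch of primary/standby failover in a two-node HA cluster
# (hypothetical node names; illustrative only).
def choose_active(nodes, is_healthy):
    """Return the first healthy node in preference order (primary first)."""
    for node in nodes:
        if is_healthy(node):
            return node
    raise RuntimeError("no healthy node available")

nodes = ["node-a", "node-b"]                 # node-a is the preferred primary
health = {"node-a": False, "node-b": True}   # simulate a primary failure
active = choose_active(nodes, lambda n: health[n])
# The standby "node-b" now provides the service
```

This is the essence of eliminating a single point of failure: the service survives as long as at least one redundant node remains healthy.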

53 Load Balancing
Loosely coupled computers that can perform the same work. A load-balancing front end distributes the work among the servers. Load-balancing clusters: Load-balancing clusters operate by having all workload come through one or more load-balancing front ends, which then distribute it to a collection of back-end servers. Although they are implemented primarily for improved performance, they commonly include high-availability features as well. Such a cluster of computers is sometimes referred to as a server farm.
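The distribution step performed by the front end can be sketched with a simple round-robin policy. The server names are hypothetical, and production balancers typically also weigh server load and health before handing out a request.

```python
import itertools

# Sketch of the round-robin policy a load-balancing front end might apply
# across its back-end server farm (hypothetical server names).
class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        """Return the next back-end server in rotation."""
        return next(self._cycle)

lb = RoundRobinBalancer(["web1", "web2", "web3"])
picks = [lb.pick() for _ in range(6)]
# Requests are spread evenly across the farm in rotation
```

Because every request passes through the front end, the same component is also a natural place to add the high-availability features mentioned above, such as skipping a back end that fails its health check.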

54 Exercises

