Sometimes, for cross-network engineering and application work, you need to know exactly which IP address ranges are assigned to China Telecom, China Netcom, China Railcom, and the other carriers. Information published online is not only scarce, it is often months out of date...
APNIC is the organization that manages IP address allocation for the Asia-Pacific region. It maintains a rich and accurate database of IP address assignments, and the information is public! Here is how to fetch the carriers' IP allocations from Linux:

shell> wget http://ftp.apnic.net/apnic/dbase/.../ripe-dbase-client-v3.tar.gz
shell> tar xzvf ripe-dbase-client-v3.tar.gz
shell> cd whois-3.1
shell> ./configure
shell> make install

With the build done, we start fetching the IP addresses.

China Netcom:
shell> ./whois3 -h whois.apnic.net -l -i mb MAINT-CNCGROUP > /var/cnc
China Telecom:
shell> ./whois3 -h whois.apnic.net -l -i mb MAINT-CHINANET > /var/chinanet
China Railcom:
shell> ./whois3 -h whois.apnic.net -l -i mb MAINT-CN-CRTC > /var/crtc

Open the resulting files and you can see very detailed information inside; you can even see the name, phone number, and e-mail address of the person in charge at each branch. If you want a neat, clean file containing just the IP address ranges, a simple grep and awk filter is all you need. :)
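As a sketch of that filtering step: the whois3 output follows the RIPE database format, where each allocation appears on an `inetnum:` line. The sample data below is made up for illustration; the same pipeline can then be pointed at /var/cnc or the other files saved above.

```shell
# Create a small made-up sample in whois/RIPE format (real output from
# the queries above has the same inetnum: lines, plus contact details).
cat <<'EOF' > /tmp/cnc.sample
inetnum:      58.16.0.0 - 58.25.255.255
netname:      UNICOM
country:      CN
inetnum:      60.0.0.0 - 60.31.255.255
netname:      UNICOM
EOF

# Keep only the address ranges, dropping the field name.
grep '^inetnum:' /tmp/cnc.sample | awk '{print $2, $3, $4}'
```

Replacing /tmp/cnc.sample with /var/cnc gives a clean list of ranges, one per line, in the form `58.16.0.0 - 58.25.255.255`.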
Sunday, March 27, 2011
Techniques for shutting down a Linux system
III. Analysis of the shutdown command.
Linux is a multiuser system: several users may be logged in at the same time, for example over SSH. Before shutting the system down, the administrator may therefore want to send a warning to all logged-in users, or schedule the shutdown for, say, ten minutes later. The shutdown command provides exactly this. It shuts down or restarts the Linux system safely, sends a warning message to every logged-in user (including remote users) beforehand, and accepts a time argument so the shutdown happens either at a precise wall-clock time or after a delay (such as ten minutes).

When the command runs, every process on the system receives a SIGTERM signal. The advantage is that programs get time to exit cleanly: a text editor such as vi can save the files being edited, and mail and news applications can properly flush the data in their buffers. This is what makes shutdown a very user-friendly way to halt the system.

When the administrator runs shutdown, the system notifies the init process, asking it to change the run level: run level 0 shuts the system down, run level 6 restarts it, and run level 1 puts the system into single-user mode for system administration tasks. If neither the -h nor the -r option is given, run level 1 is the default for the shutdown command. After executing the command the system automatically synchronizes data to disk, so the command may take a little longer; but considering data integrity, the extra time is worth it to an administrator. The run level mentioned here is a concept that marks one of the larger differences between Linux and Windows systems.
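The SIGTERM delivery described above can be simulated in a small script. This is only a sketch of how a well-behaved program reacts to the signal that shutdown broadcasts; the function name cleanup is made up for illustration.

```shell
# A program that traps SIGTERM, much as vi or a mail client effectively
# does, so it can save its state before the system goes down.
cleanup() {
    echo "saving buffers and exiting cleanly"
}
trap cleanup TERM

kill -TERM $$    # simulate the signal every process receives at shutdown
```

Without the trap, the default SIGTERM action would terminate the process immediately, losing any unsaved state.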
A so-called run level is a software configuration of the system in which only a selected group of processes is allowed to exist. init spawns different processes for each run level. init can boot into eight different run levels: 0 through 6 plus S (or s). The superuser can switch run levels with the telinit command, which passes a signal to init telling it which run level to switch to. Run levels 0, 1 and 6 are reserved by the system: run level 0 is used to shut down, run level 6 to restart, and run level 1 to put the computer into single-user mode. Run level S is not used by us directly; it exists mainly to run the scripts invoked when entering run level 1.

The shutdown command takes different options for different purposes. shutdown -k does not actually shut down: it simply sends a warning to every logged-in user (including remote users). Using it to notify the other logged-in users before the real shutdown is good practice for a system administrator. In addition, Linux, like Windows, sometimes halts without the power being cut; shutdown -h explicitly tells the system to power off after halting. Note that both the shutdown and halt commands call the getuid system call to determine whether the current user is root: if so, the shutdown proceeds; if the current user is an ordinary user, the command exits directly.

Kernel compile procedure for 2.6.x
Abstract: compiling kernel 2.6.x differs a bit from earlier versions, so I am simply writing out the compile process, mainly to help beginner Linux brothers.
Compiling kernel 2.6.x is relatively simple; this article, using version 2.6.0-test8 on Red Hat as the example, is aimed at newbies!

1. Move the kernel downloaded from http://www.kernel.org into the /usr/src directory and unpack it. For example, I downloaded linux-2.6.0-test8.tar.bz2:

# mv linux-2.6.0-test8.tar.bz2 /usr/src
# tar jxvf linux-2.6.0-test8.tar.bz2

2. Enter the source directory, then enter the kernel option settings:

# cd linux-2.6.0-test8
# make mrproper
# make menuconfig

In the kernel configuration tabs, M compiles a feature as a module, while * builds it directly into the kernel. (Try searching the Article Manager and the relevant discussion boards for details; the kernel's own documentation also has corresponding material.)

3. Compile and install the kernel:

# make
# make install

During this process, what does the system do for us automatically?
1] it produces bzImage under /usr/src/linux/arch/i386/boot/, copies it to the /boot directory as vmlinuz-2.6.0-test8, and creates the vmlinuz link to it;
2] it produces System.map-2.6.0-test8 in the /boot directory and creates a link to it;
3] it automatically generates initrd-2.6.0-test8.img in the /boot directory;
4] it modifies the /etc/grub.conf file, adding a boot entry for the new kernel.
[Note] Fearing that beginner brothers would not understand this process, I had to add this detailed write-up. The four points above are the tasks the system performs automatically when installing the kernel; take a look at the files in /boot and at /etc/grub.conf and you will understand. This is not the same as the earlier 2.4.x. By analogy, 2.4.x was semi-automatic, with a lot of commands for you to run; in 2.6.x everything falls into place in one step, fully automated!

4. Compile and install the modules:

# make modules
# make modules_install

5. Set up /etc/grub.conf. I don't use lilo, or rather I don't understand it, so I can only describe the settings for GRUB. Why set this at all? When compiling and installing the kernel, the system automatically adds the new kernel entry directly to /etc/grub.conf; but if we boot the new kernel without making changes to /etc/grub.conf, we hit errors such as a VFS error. The following is my /etc/grub.conf after installing the new kernel, before any changes:

title FedoraCore (2.6.0-test8)
    root (hd0,6)
    kernel /boot/vmlinuz-2.6.0-test8 ro root=LABEL=/
    initrd /boot/initrd-2.6.0-test8.img
title FedoraCore (2.4.22-1.2061.nptl)
    lock
    root (hd0,6)
    kernel /boot/vmlinuz-2.4.22-1.2061.nptl ro root=LABEL=/
    initrd /boot/initrd-2.4.22-1.2061.nptl.img

After the change it reads:

title FedoraCore (2.6.0-test8)
    root (hd0,6)
    kernel /boot/vmlinuz-2.6.0-test8 ro root=/dev/hda8
    initrd /boot/initrd-2.6.0-test8.img
title FedoraCore (2.4.22-1.2061.nptl)
    lock
    root (hd0,6)
    kernel /boot/vmlinuz-2.4.22-1.2061.nptl ro root=LABEL=/
    initrd /boot/initrd-2.4.22-1.2061.nptl.img

Comparing the changed /etc/grub.conf, it is not hard to see that for our new 2.6 kernel the root partition is specified by its real device location rather than by the label LABEL=/. Brothers not yet too familiar with Linux, please do not copy my partition settings; if you want to understand this, look in the discussion boards and the Article Manager for the articles on GRUB and partitions. Finally, one more point: if you had display card drivers installed, reinstall them under the new kernel. For example, I have an NVIDIA graphics card; I downloaded the NVIDIA driver patch a brother provided in the linuxsir.org downloads area, and with that everything was OK.

Linux-based network test system design (2)
Chapter II: Linux system configuration. Section 1: Linux network settings. TCP/IP is the most widely used UNIX networking protocol, and Linux, as a kind of UNIX operating system, is no exception.
Linux has treated networking as an inseparable part of itself since its birth. The Linux kernel provides strong, direct support for networking: it supports many network devices (from network cards, modems, and ISDN adapters to various routers), and it inherits much high-performance network software from Unix, giving Linux excellent network performance. Linux not only supports existing network protocols; because thousands of Internet network experts take part in its development, it is also ready for some future protocols, such as IPv6, already available in beta. Since this experimental system is a Linux-based network lab system, the task after becoming familiar with the system is to study Linux network configuration and its management methods.

Network card installation: the Linux kernel supports all kinds of Ethernet cards, as long as network support and the driver for the card's model were compiled into the kernel at build time. In general, the card support code is compiled as a dynamically loadable module. Besides kernel support, we must also pass the NIC's I/O port number (0x320 for this card) and interrupt number (11 for this card) to the kernel, so the system can really control the adapter for network communication. We have it loaded via the file /etc/conf.modules (this system also loads a sound card module); the contents of this file are as follows:

alias eth0 ne
options ne io=0x320 irq=11
alias sound sb
alias midi opl3
options opl3 io=0x388
options sb io=0x220 irq=7 dma=1 mpu_io=0x201

We can also use linuxconf, an integrated GUI program that can run in a terminal window, to perform most system settings.
IP address and network adapter settings: we use the control panel to launch the configuration program. Running controlpanel brings up a window containing many configuration tools, from system daemons, network settings and user management to the configuration program for the Apache www server. In Control Panel, select network configuration; in the window that appears, click Add, and in the next window fill in the IP bar with the computer's IP address, 172.31.0.10, and the network mask with 255.255.255.0, leaving the rest at the defaults. Select Done to exit.

Alternatively, we can add the appropriate content directly in the configuration files. Create the file /etc/sysconfig/network, which reads as follows:

NETWORKING=yes
FORWARD_IPV4=false
HOSTNAME=linuxserver.ec.edu
DOMAINNAME=ec.edu
GATEWAY="172.31.0.200"
GATEWAYDEV="eth0"

Gateway settings: in the network configuration window, select Routing, then click Add in that window. Fill in eth0 in the Device bar, the network 172.31.0.0 in the Network bar, the netmask 255.255.255.0 in the Netmask bar, and the gateway address 172.31.0.200 (the laboratory's gateway address) in the Gateway bar. Select Done to exit. The other way is to write the configuration file directly: create the file /etc/sysconfig/static-routes, whose content is:

eth0 net 172.31.0.0 netmask 255.255.255.0 gw 172.31.0.200

III. Testing network connections and operation. 1. Test communication with other clients on the subnet, using the ping command. For example, for the IP address 172.31.0.105, type "ping 172.31.0.105" in a terminal window; the Linux host will send packets to the 172.31.0.105 host and have it return them, as shown in the diagram. If the time values shown are greater than 0, the basic network settings are correct.
If the time value equals -1, there is a problem with the network settings. You can then test further with telnet, ftp, and other programs.

Section 2: file system, user, and user interface management. For file system and user management, the Red Hat system provides the integrated administration program Linuxconf, divided into a character-interface mode, a graphical-interface mode, and a Web-interface management mode (introduced further in the later part on managing the experimental system). It can handle not only the file system and user management, but also network device loading, startup program settings, and so on. You invoke it as linuxconf; it automatically determines whether an X Window graphical environment is present and then calls the appropriate program. The figure shows Linuxconf in graphics mode. In this integrated management program we can do the initial setup for the experiment: create the user group IT; add the experiment-management account and the test user accounts (user1 through user50); and at the same time set up disk quota management.

1. Disk quota settings. Among the file system settings, disk quota management is very important. Because this experiment must serve many students with the system's limited hard disk space, every student account used in the experiments is allocated a disk quota. Conveniently, Linux provides quota, quotaon, quotaoff and related commands to set disk quotas: a user account is assigned an appropriate amount of disk space, a warning is issued beyond a certain amount, and beyond a further amount the user is no longer allowed to upload files. Combined with the experiment-management programs, this keeps the system from exhausting its disk space and stopping work. The specific settings follow (each user's warning threshold is 5 megabytes, with 2 megabytes of grace space). The quota, quotaon and quotaoff commands are available for quota management; disk quotas restrict usage in particular areas of the hard drive.
We can use a GID to restrict a group of users, or apply restrictions to a single user. The various commands used to manage hard disk quotas are as follows:

quota - generate a report of a user's disk quota
quotaon - turn on the disk quota feature for a user
quotaoff - turn off the disk quota feature for a user
repquota - also generates reports on hard disk quota usage
edquota - edit and adjust a user's quota
quotacheck - check file system quota usage

First log in as root and edit the file system table /etc/fstab. Because our experimental system puts disk quota management on /dev/hda5, we must modify the options on it; the entire file reads as follows:

/dev/hda1   /            ext2   defaults                               1 1
/dev/hda5   /user        ext2   exec,dev,suid,rw,usrquota,grpquota     1 2
/dev/hda6   swap         swap   defaults                               0 0
/dev/fd0    /mnt/floppy  auto   sync,user,noauto,nosuid,nodev,unhide   0 0
/dev/cdrom  /mnt/cdrom   auto   user,noauto,nosuid,nodev,exec,ro       0 0
none        /proc        proc   defaults                               0 0

Then we generate the quota.user and quota.group files under /user and set their permissions to 600, using the touch and chmod commands:

# touch /user/quota.user
# chmod 600 /user/quota.user
# touch /user/quota.group
# chmod 600 /user/quota.group

Now restart the computer, and the quota system begins to work. Next, we use edquota -u username to set the disk quota for user username, as follows (for user user1):

# edquota -u user1

edquota invokes an editor reading /user/quota.user; we edit it as follows: the first soft parameter is the minimum hard disk amount (in KB) that triggers a warning, and the first hard parameter is the maximum hard disk amount (in KB) distributed; the second soft parameter is the minimum number of files that triggers a warning, and the second hard parameter the maximum number of files. Now for user1's grace-period setting: the system default is one week (7 days), clearly a long time. We will set it to 5 minutes, so that users over quota are warned within that period.
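For orientation, the editor buffer that edquota -u user1 opens looks roughly like the following on quota tools of this vintage (the numbers are illustrative only: a 5120 KB soft limit and 7168 KB hard limit, matching the 5 MB warning threshold plus 2 MB of grace space mentioned above):

```
Quotas for user user1:
/dev/hda5: blocks in use: 0, limits (soft = 5120, hard = 7168)
        inodes in use: 0, limits (soft = 0, hard = 0)
```

Saving and quitting the editor makes edquota write the limits back to /user/quota.user.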
Next, use edquota -t to set the grace period (here for user user1):

# edquota -t user1

edquota again opens /user/quota.user; we set the period in the appropriate place, which appears as shown. Finally, check that the user's quota is correct, using the quota command.

To set /dev/hda5 as the quota disk with Linuxconf: double-click Access local drive -> /dev/hda5, then make the disk quota system work by enabling User quota enabled and Group quota enabled, as in the figure. Then set the disk quota for each user: Disk space soft limit sets the lowest warning threshold, Disk space hard limit sets the maximum amount of disk, and Disk space grace period sets the warning time; likewise, Files soft limit sets the lowest file-count warning threshold, Files hard limit sets the maximum number of files, and Files grace period sets its warning time.

Practical test: log in with an account under disk quota and copy some large files (more than 7 MB) into the system, to see whether the system issues a warning message and then no longer allows the user to upload files. The figure shows the warning information printed in the shell when the disk quota warning level is exceeded. In addition, uploading a large number of files over FTP for testing confirms that once over the limit, the system denies the user further uploads.

2. User interface management. Because our system is an experimental system, it is necessary to change the user interface somewhat and add some essential hints to help with the experiments. 1. Telnet login interface settings: edit /etc/rc.local and comment out the statements concerning issue.net:

# echo "" > /etc/issue.net
# echo "Linux Mandrake $R" >> /etc/issue.net

About Apache: several common application examples and analysis
A. How do you give each user a separate home page? With the default settings, you create a public_html directory in your user home directory and put your page files in that directory; entering http://servername/~username then reaches them. But please note the following: 1. Log in as root and modify the permissions of the user's home directory (# chmod 705 /home/username), so that other people have the right to enter and browse that directory.
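The effect of that chmod in step 1 can be checked in a scratch directory first. This sketch uses a temporary directory and a made-up user name in place of a real /home entry.

```shell
# Stage a fake home directory and apply the mode from step 1.
base=$(mktemp -d)
mkdir "$base/username"
chmod 705 "$base/username"

# 705 gives the owner full rwx access, the group nothing, and "other"
# users (which includes the Apache server process) read and enter (r-x).
stat -c '%a' "$base/username"    # prints 705
```

The same check against the real directory is simply stat -c '%a' /home/username after the chmod.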
2. Log in with your user name and create the public_html directory, ensuring the directory has the correct permissions for others to enter. 3. Apache's default home page is index.html, not index.htm; you can change the corresponding line in the file /etc/mime.types to read as below, and then Apache will read your index.htm file:

text/html html htm

4. It is best to set any directories under the user's home directory that should stay private to permissions 0700, to ensure that others cannot access them.

B. How do you set up virtual hosts? 1. Assuming the server's IP is 192.168.11.2 and we want to add a virtual IP address, 192.168.11.4, add the following lines to /etc/rc.d/rc.local:

/sbin/ifconfig eth0:0 192.168.11.4
/sbin/route add -host 192.168.11.4 eth0:0

2. Add the following lines to /home/httpd/conf/httpd.conf (the angle brackets are literal):

<VirtualHost 192.168.11.4>
ServerAdmin your_email_address
DocumentRoot /home/httpd/foldername
ServerName virtualservername
ErrorLog /var/log/httpd/foldername/error.log
TransferLog /var/log/httpd/foldername/access_log
</VirtualHost>

3. If your LAN has a DNS server, add the corresponding record: 192.168.11.4 ---> virtualservername

C. How do you use Apache to password-protect a directory? By default, access can be delegated with a .htaccess file in the directory, like this:

AuthName stuff
AuthType Basic
AuthUserFile /etc/.userpasswd
require valid-user

To give the user user1 access, assign user1 a password with:

# htpasswd -c /etc/.userpasswd user1

D. How do you share a directory out for browser access, such as /home/ftp/pub/? 1. Add the following line to /home/httpd/conf/srm.conf:

Alias /pub /home/ftp/pub/

2. Change the default file type line in /home/httpd/conf/srm.conf:

DefaultType application/octet-stream

3. Restart Apache: /etc/rc.d/init.d/httpd restart

VPN (pptpd) deployment under CentOS
I. Installation. On CentOS 5 the simplest installation is just to download the RPM package pptpd-1.3.4-1.rhel5.1.i386.rpm, then run rpm -ivh pptpd-1.3.4-1.rhel5.1.i386.rpm directly.
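Among the files the RPM lays down is /etc/ppp/chap-secrets, the account file used later in the configuration. A hypothetical entry looks like this (the user name and secret are made up; * lets the server assign the client's IP):

```
# client        server  secret          IP addresses
vpnuser         pptpd   "s3cret"        *
```

The server name in the second column must match the name directive set in /etc/ppp/options.pptpd, or authentication will fail.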
Fortunately this package has no other dependencies, so installation should go without incident. If you install from source, a ppp version conflict can occur, requiring you to remove pppd 2.3.4 and then install ppp 2.4.3, which is troublesome. After you install the RPM package, several configuration files are generated automatically: the main configuration file /etc/pptpd.conf, the options file /etc/ppp/options.pptpd, and the account file /etc/ppp/chap-secrets. Configuring pptpd afterwards mainly means modifying these files.

II. VPN topology. Logically, VPN access involves three networks: 1. the target network you wish to reach, usually the internal network where the VPN server sits (the VPN server has two network cards, one on the public network and one on the private network); 2. the public network the VPN crosses; 3. the virtual network formed by the clients after they connect. It is suggested that the virtual network be a separate subnet, so that it does not take up IP address resources of the private (target) network behind the VPN. Of course the VPN tunnel network can share the target network's segment, but this is not recommended.

BleachBit: a Linux system cleanup tool
BleachBit is a system cleanup tool designed specifically for Linux.
With BleachBit, you can clean up system caches, history, temporary files, cookies, and other unwanted things, freeing up your disk space. Currently, BleachBit can clean up junk files generated by Beagle, Firefox, Opera, Epiphany, Flash, OpenOffice.org, and other software. As shown in the screenshot, click the items that need cleaning and press Preview to see what would be cleaned from the system. If you believe it is correct, press Delete to remove those contents. BleachBit provides rpm and deb binary packages for Fedora/RHEL/CentOS/Debian/Ubuntu, openSUSE, and other Linux distributions; users of other distributions can choose the BleachBit source package.

Troubleshooting: methods for handling a Linux operating system panic
Usually, after a system crash occurs, you worry it will fail again, only to find that the system log recorded no information at all around the panic, leaving no way to analyze the cause of the failure, let alone cure it.
In practice, however, Linux has several mechanisms that guarantee that after a system crash you can obtain valuable information for analyzing the problem, and identify whether it was a hardware failure or an application bug. There are several ways in Linux to obtain information resulting from a crash.

1. Coredump. Coredump is typically used to debug application errors: when a running application crashes with an exception, you can turn on the system's coredump function to capture the crashing program's memory image and analyze the cause of the crash. Add to (or modify in) /etc/profile:

ulimit -c unlimited

and run the command:

sysctl -w kernel.core_name_format="/coredump/%n.core"

This command puts core files in the /coredump directory, with the file name being the process name plus .core.

2. Diskdump. The diskdump tool provides the capability to create and collect a vmcore (kernel dump) on a single machine, without using the network. When the kernel itself crashes, the current memory and CPU state and related information are saved to the reserved diskdump partition on disk. When the system restarts at the next reboot, the diskdump initialization script reads the saved information from the reserved partition, creates a vmcore file, and stores it in the /var/crash/ directory, with a file name beginning with 127.0.0.1-. The following configures the diskdump process for an HP SCSI device; if you do not have an HP SCSI device (i.e. your device is named in the /dev/sdX form), you do not need to perform steps three and four, but before step one execute the command: modprobe diskdump.

Step one: edit the /etc/sysconfig/diskdump file and fill in the device name of a blank partition, for example the following, then save and exit:

DEVICE=/dev/cciss/c0d0p2

Step two: initialize the dump device:

# service diskdump initialformat

Warning: all data on that partition will be lost.
Step three: replace the current cciss module with the cciss_dump module. In /etc/modprobe.conf, find the line:

alias scsi_hostadapter cciss

amend it to:

alias scsi_hostadapter cciss_dump

and add the line:

options cciss_dump dump_drive=1

Note: assuming the diskdump file is configured with /dev/cciss/c0d[#a]p[#b], set: options cciss_dump dump_drive=[#a]

Step four: rebuild the initrd file:

# mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img.old
# mkinitrd /boot/initrd-`uname -r`.img `uname -r`

Step five: set the diskdump service to start at boot:

# chkconfig diskdump on

3. Netdump. If you use the Red Flag DC 4.0 or 3.0 release, whose systems cannot support diskdump, you can use netdump to produce the vmcore instead. Netdump requires at least one server and any number of clients: the server receives the clients' panic information, and a client is a machine that panics.

(A) Server configuration:
(1) Check whether netdump-server is installed: rpm -q netdump-server. If it is not installed, find the netdump-server package in the RedFlag/RPMS/ directory of your CD and install it with: rpm -ivh netdump-server-x.x.x.rpm (x is the version number).
(2) With the server package installed, change the netdump user's password with the command: passwd netdump
(3) Enable the service: chkconfig netdump-server on
(4) Run the server: service netdump-server start

Opening the corresponding ports when the Linux system firewall is enabled
When the firewall is enabled on Linux, you will find that logging in to port 23 from the machine itself is no problem, but if you log in to the Linux system from another PC, you will see this error: Cannot open a connection to host, on port 23: Connect failed. This is because the Linux firewall closes the port by default. To allow remote login on port 23, you can either turn the firewall off or open port 23 in the firewall, as follows.

Effective immediately, lost after a reboot:
enable: service iptables start
disable: service iptables stop

Preserved across reboots:
enable: chkconfig iptables on
disable: chkconfig iptables off

To keep the firewall enabled and open the relevant port, modify the /etc/sysconfig/iptables file and add the following line:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 23 -j ACCEPT
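Rule order matters in that file: the ACCEPT line must come before the chain's final REJECT rule, or it never fires. Shown in context, where the surrounding lines are typical Red Hat defaults that may differ on your system:

```
# /etc/sysconfig/iptables (fragment)
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# the added line, opening TCP port 23 for telnet:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 23 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
```

After editing, reload the rules with service iptables restart.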
Using sudo to harden Linux system security
3. Installing sudo. Download the sudo compressed package to a directory we specify, such as the /root directory.
Regardless of the operating system, these operations are similar on any system. ⒈ Switch to the directory containing the archive, then extract it with the following command. Note that your version will not necessarily be the same, so modify the version number to match the real case:

tar -zxvf sudo-1.6.3p5.tar.gz

⒉ The command above creates a new directory, such as sudo-1.6.3p5, depending on your version. ⒊ Switch into the sudo directory with:

cd sudo-1.6.3p5

⒋ Use the following command to create the Makefile and config.h files, which we will use to configure sudo:

./configure

⒌ You can also add options to the ./configure command to customize the sudo installation. It is actually very simple: just append the options you want after the ./configure command. To learn the various options available, see the /sudo/INSTALL file. ⒍ You can also edit the Makefile to change the default installation path, as well as edit other configuration noted in the /sudo/INSTALL file. To do this, open the Makefile with a text editing program; for example, type the following command:

vi Makefile

⒎ Find the section near the beginning of the Makefile headed "Where to install things...", as shown in figure 1 (the sudo Makefile). ⒏ If necessary, you can change the default paths here; but we will use the defaults. ⒐ Exit the file; in the vi text editor, you can use the command :q. ⒑ In practice, when we run ./configure we can also change the default installation path by appending an option to the command. For example, the sudoers file is by default installed in the /etc directory; we can change the installation location for that file with:

./configure --sysconfdir=DIR

where DIR is the new installation directory.
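Once built and installed, sudo is driven by the sudoers file just discussed (the one whose location --sysconfdir controls). A couple of hypothetical entries, always to be edited with visudo, which checks the syntax before saving:

```
# /etc/sudoers -- illustrative entries, not shipped defaults
# members of group wheel may run any command as any user
%wheel   ALL=(ALL) ALL
# user "student" may only run the shutdown command, as root
student  ALL=(root) /sbin/shutdown
```

The general form of each rule is: who, on which hosts, as which users, may run which commands.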
⒒ To compile sudo, run the make command:

make

⒓ If you want to install sudo somewhere other than the source file directory, you will need GNU make. If an error occurs during installation, you can ask for help in the TROUBLESHOOTING file and the PORTING file. ⒔ We must act as the root user to install sudo, because this requires superuser privileges. As root, run the make install command, which installs the program, the manual pages, and the sudoers file (later edited with visudo). It is worth noting that make install will not overwrite any existing sudoers file. ⒕ Well, we have installed sudo; next we describe how to configure it to meet our needs.

Friday, March 18, 2011
Embedded systems and how to construct an embedded system
Most Linux systems run on PC platforms; however, Linux can also work reliably in embedded systems.
This paper gives an overview of embedded systems and demonstrates the issues involved in applying a commercial embedded Linux to a system. Embedded systems, those ancient computers buried inside control equipment, have been around us almost as long as the computer itself. In the communications field, such embedded systems were used as early as the late 1960s to control electromechanical telephone exchanges; they were called "stored program control" systems, at a time when the word "computer" was not yet fashionable. The stored program was the program and routing information kept in memory, and storing the control logic instead of building it into hardware was a real conceptual breakthrough; nowadays we take this kind of mechanism for granted. Those computers were made for each application (in short, they were application-oriented): by today's standards, they had strange private instruction sets and I/O devices integrated into the main computing engine, like mutants. The microprocessor changed this situation by providing a compact, inexpensive CPU engine that could be used like a building block in large systems; it is built on a strict hardware architecture that hooks different peripherals together over a bus, and it provides a simplified, general-purpose programming model.

Along with the hardware, software also made progress. Initially, only a few simple development tools were available for creating and debugging software; the software for each project was usually improvised from scratch. Because compilers often had many errors and decent debuggers were lacking, this software was almost always written in assembly language or macro languages. The ideas of software building blocks and standard libraries did not become popular until the mid-1970s. Off-the-shelf operating systems (OSes), independent of any one project, began to appear for embedded systems in the late 1970s.
Many of them were written in assembly language and could be used only on the microprocessor they were written for. When those microprocessors became obsolete, the OSes that depended on them were doomed as well and had to be rewritten from scratch for the new processors. Today, many of these early systems have become only fuzzy memories; is there anyone who still remembers MTOS? When the C language appeared, operating systems could be written in an efficient, stable, and portable way. This had a direct appeal to management, who saw in it the hope of protecting their software investment when a host microprocessor was abandoned; it sounds a bit like a legendary story of commercial marketing. Writing an OS in C has become the standard today. In summary, software reusability was accepted and works very well. In the early 1980s, my favourite OS was the Wendon operating system: for perhaps 150 dollars you could obtain a library of C source code, a development kit from which you could build your own OS by choosing components; the entire process was like ordering from a Chinese restaurant menu. You could, for example, select a task-scheduling scheme and a memory-management scheme from a list of feasible options.

Commercial operating systems for embedded systems flourished in the 1980s, and that original stew (Wendon) has developed into the modern commercial operating system. Today there are dozens of commercial operating systems to choose from, with a number of competing products such as VxWorks, pSOS, Nucleus, and Windows CE. Many embedded systems have no operating system at all, only one control loop. For very simple embedded systems, this may be enough. However, as embedded systems grow in complexity, an operating system becomes important, because otherwise the complexity of the control software becomes unreasonable.
Sadly, reality does include some complex yet formidable embedded systems, and they keep getting more complex, because their designers insist that their systems do not need an operating system. Increasingly, embedded systems need to be connected to some network, so they need a network protocol stack; even many hotel door handles now contain a networked microprocessor. Adding a network stack to an embedded system that was a simple control loop may bring enough complexity to arouse the desire for an operating system. Besides the various commercial operating systems, there are many proprietary ones. Many are hand-rolled in-house systems, like Cisco's IOS; some are rewrites of another operating system, as with the many network products derived from the same Berkeley UNIX, because it has full networking support; and there are also operating systems based on public-domain sources, such as Phil Karn's KA9Q. As a candidate embedded operating system, Linux has some compelling advantages: it has been ported to many different CPU architectures and hardware platforms, it is stable, it performs well, and it is easy to develop for and upgrade.

Development tools: breaking the barrier of the traditional emulator. The tools available at the different stages of developing an embedded system are critical; as in any trade, good tools help get the job done quickly and well, and at different stages of development you may want different tools. Traditionally, the preferred tool for developing embedded systems has been the in-circuit emulator (ICE).
This is a relatively expensive piece of equipment that plugs into the circuit between the microprocessor and its bus, letting the developer monitor and control all of the microprocessor's inputs, outputs and activity. ICEs can be difficult to hook up, and because they are intrusive they can cause unstable behaviour. Even so, they give a clear bus-level picture of what is happening in the system and eliminate much of the guesswork at the bottom of the hardware/software interface. In the past, some projects relied on one, often through the whole development cycle, as the main debugging tool. However, as soon as the software can support a serial port, a lot of debugging can be done without an ICE, using a different approach. Likewise, most new embedded designs use fairly cookbook-style microprocessor circuits; developers of startup code usually try to get the serial port working as early as possible, which means they can make good progress without an ICE and shed its cost. Once the serial port works, it can support increasingly sophisticated development tools: Linux together with the GNU C compiler, and the rest of the GNU toolset with the source-level debugger gdb, provides all the software tools needed to develop an embedded Linux system. A typical sequence of debugging tools and steps for bringing up a new embedded Linux system on new hardware is: 1. Write or port the startup code (discussed in detail later). 2. Write code that sends a string out the serial port, such as "Hello, world!"
(Actually, I prefer the first words spoken over the telephone: "Watson, come here, I need you.") 3. Port the gdb target stub so that it works over the serial port; this lets a gdb program running on a Linux host on another computer hold a session with the target. You just tell gdb which serial port the target is on; gdb talks over the serial port to the gdb stub on your test machine and gives you full C source-level debugging. You can also use this channel to download additional code into RAM or flash. 4. Use gdb to bring up the rest of the initialization code, until the Linux kernel can take over all of the hardware and software. 5. Once the Linux kernel boots, the serial port becomes a Linux console port and can serve the rest of the development process, together with kgdb, the debuggable version of the kernel. This step is often unnecessary; if you have a network connection, for example 10BaseT, you will probably want it working right away. 6. Once the Linux kernel on your target platform is fully functional (i.e. with no functions missing), you can use gdb, or a graphical front end such as xgdb, to debug your application processes.

Real time: is it really? Rash as it sounds, most systems are not. Embedded systems are often loosely called real-time systems, but most of them have no hard real-time characteristics. The stricter definition of real time is hard real time: responding to an event within a very short period (milliseconds) and in a deterministic way. Nowadays much hard real-time functionality is migrating into DSP or ASIC designs, handled by appropriate hardware such as FIFOs, DMA or other specialized logic. On most systems, a response time of 1 to 5 ms is sufficient to count as real time.
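Step 3 above can be illustrated with a host-side session transcript. This is only a sketch: it assumes the target board is already running a gdb serial stub, and the device name, kernel image name and baud rate are made-up examples, not from the original text.

```text
# Host side of a gdb-over-serial session (illustrative only; requires
# target hardware running a gdb stub on its serial port).
$ gdb vmlinux                      # start gdb with the target's symbol file
(gdb) set remotebaud 115200        # match the stub's serial speed
(gdb) target remote /dev/ttyS0     # attach over the host's serial port
(gdb) break start_kernel           # stop at a symbol in the startup path
(gdb) continue
```

The same connection can later carry code downloads into RAM or flash, as the article notes.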
Of course, looser requirements are acceptable in some cases. For example, Windows 98 display handling requires an interrupt request to be serviced within 4 ms in 98% of all cases, and within 20 ms in 100% of cases. Such liberal real-time requirements are easily met. Meeting them involves several issues, including context-switch and interrupt latency, and task selection and scheduling. Context switching was once a hot topic in operating systems, but since most CPUs now handle it reasonably well and CPU speeds have become fast enough, it is no longer a major concern. Strict timing requirements should normally be handled by an interrupt routine or a kernel-resident event-driven function, to guarantee consistent behaviour when the interrupt occurs. Interrupt latency, the time that passes before the interrupt is actually being processed, is determined largely by interrupt priorities and by other software that temporarily masks the interrupt. Interrupts must be designed and arranged efficiently to meet the timing requirements, just as in any other OS. On the Intel x86 processor family, this work can be handled by the real-time extensions to Linux (Real-Time Linux, i.e. RTLinux; see http://www.rtlinux.org/). In essence, RTLinux provides an interrupt-processing scheduler that runs Linux as its background task. Critical interrupts are serviced without waiting on anything Linux is known to do, so you keep control of the critical timing. This approach provides an interface between the hard real-time level and the more relaxed base Linux level, and gives a framework for real-time processing similar to that of other embedded operating systems.
Ultimately, to meet real-time performance requirements, isolate the time-critical code into efficiently arranged snippets, and process the results of that code in a more general way, perhaps at the process level.

Embedded systems: a definition. One view: if an application has no user interface, so that the user cannot interact with it directly, then it is an embedded system. That is of course too simplistic. An elevator control system is an embedded system, yet it has a user interface: the floor buttons and the indicator showing which floor the elevator has reached. And for embedded systems connected to a network, if the system contains a web server used for monitoring and control, the distinction blurs even further. A better definition should emphasize the system's dominant characteristic or primary purpose. Since Linux can provide, in one basic kernel, both the embedded functionality and whatever user-interface elements you want, it has a strongly general-purpose character.

A detailed analysis of Linux file attributes
5. The setuid and setgid bits. To understand this part, work through it with the examples. 5.1 What setuid and setgid mean. The setuid and setgid bits allow an ordinary user to run, with the privileges of the file's owner or group (typically root), a program or command that otherwise only the root account could usefully run.
For example, when an ordinary user runs the passwd command to change his own password, what ultimately changes is the /etc/passwd file. We know /etc/passwd is the user-management configuration file, and only root has permission to change it:

[root@localhost ~]# ls -l /etc/passwd
-rw-r--r-- 1 root root 2379 04-21 13:18 /etc/passwd

So an ordinary user certainly cannot change his password by editing /etc/passwd directly; but can he do it through a command? The answer is yes: an ordinary user can change his own password with passwd, thanks to the passwd command itself. Let's take a look:

[root@localhost ~]# ls -l /usr/bin/passwd
-r-s--x--x 1 root root 21944 02-12 16:15 /usr/bin/passwd

Because /usr/bin/passwd has the setuid permission bit set (the s in r-s--x--x), an ordinary user running it temporarily becomes root and can thereby, indirectly, modify /etc/passwd to change his own password. Linux user management is strict: different users have different permissions, and for work that only root can do, an ordinary user must have his privileges elevated, most commonly with su or sudo. setuid and setgid likewise let ordinary users exceed their normal permissions and act with root's, but I do not recommend using them, because they can be a security risk to your system! Note: setuid and setgid are risky, so use them as little as possible; learn them in order to understand them. 5.2 A setuid/setgid example. Suppose we want the ordinary user beinan to have root's super delete power with rm. We could use su or sudo to switch to root temporarily, but here is the setuid way.
[root@localhost ~]# cd /home                     # enter the /home directory
[root@localhost home]# touch beinantest.txt      # create a test file
[root@localhost home]# ls -l beinantest.txt      # view the file's attributes
-rw-r--r-- 1 root root 0 04-24 18:03 beinantest.txt
[root@localhost home]# su beinan                 # switch to the ordinary user beinan
[beinan@localhost home]$ rm -rf beinantest.txt   # try to delete the file as an ordinary user
rm: cannot remove 'beinantest.txt': Permission denied

So how do we give the ordinary user beinan root's super rm deleting power?

[root@localhost ~]# ls -l /bin/rm
-rwxr-xr-x 1 root root 93876 02-11 14:43 /bin/rm
[root@localhost ~]# chmod 4755 /bin/rm           # set rm's permissions to 4755, i.e. set the setuid bit
[root@localhost ~]# ls -l /bin/rm
-rwsr-xr-x 1 root root 93876 02-11 14:43 /bin/rm
[root@localhost ~]# cd /home
[root@localhost home]# su beinan                 # switch to the user beinan
[beinan@localhost home]$ ls -l beinantest.txt    # view the file's attributes
-rw-r--r-- 1 root root 0 04-24 18:03 beinantest.txt
[beinan@localhost home]$ rm -rf beinantest.txt   # delete beinantest.txt -- this time it works

By merely setting the setuid bit on rm, we gave an ordinary user root's super deleting power through the rm command. This example should make the application of the setuid and setgid bits clear: as said earlier, they let an ordinary user go beyond his own abilities and execute commands that only root can execute. At this point, compare this with su and sudo. 5.3 How to set setuid and setgid. First method, octal: the setuid bit occupies octal 4000, and the setgid bit octal 2000. The chmod 4755 /bin/rm above sets the setuid bit; it is just the usual octal chmod of a file or directory with one extra leading digit, here 4. For example:
[root@localhost ~]# chmod 4755 /bin/rm           # set rm's permissions to 4755, i.e. set the setuid bit

Since the setgid bit occupies octal 2000, here is a setgid example:

[root@localhost ~]# cd /home
[root@localhost home]# mkdir slackdir
[root@localhost home]# ls -ld slackdir/
drwxr-xr-x 2 root root 4096 04-24 18:25 slackdir/
[root@localhost home]# chmod 2755 slackdir/
[root@localhost home]# ls -ld slackdir/
drwxr-sr-x 2 root root 4096 04-24 18:25 slackdir/

Linux-based load balancing technology
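The octal settings above can be tried out safely on a scratch file rather than on a system binary like /bin/rm; the temporary file and directory names below are arbitrary, and no root privileges are needed since you are changing files you own.

```shell
# Safe demonstration of the setuid (4000) and setgid (2000) octal bits
# on scratch files instead of /bin/rm. Names are arbitrary.
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 0755 "$tmp/demo"
stat -c %a "$tmp/demo"        # 755 -- plain rwxr-xr-x
chmod 4755 "$tmp/demo"        # add the setuid bit (leading octal 4)
stat -c %a "$tmp/demo"        # 4755 -- ls -l would now show -rwsr-xr-x
mkdir "$tmp/shared"
chmod 2775 "$tmp/shared"      # setgid on a directory (leading octal 2)
stat -c %a "$tmp/shared"      # 2775 -- ls -ld shows drwxrwsr-x
rm -r "$tmp"
```

Setting setuid on a file you own takes effect immediately, but the bit only grants extra privilege when the file is owned by a privileged user, which is exactly why the /bin/rm example above is dangerous.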
Preface. To date, whether on corporate intranets or on wide-area networks such as the Internet, traffic growth has exceeded the most optimistic past estimates, and new Internet applications roll out in an endless stream; even a network built to the optimal configuration will soon feel overloaded.
This is especially true in the core of the network, where the data flows and computational loads have grown beyond what a single device can carry. How to distribute traffic reasonably across multiple network devices performing the same function, so that no device is overloaded while others sit with their processing power unused, became a real problem, and load-balancing mechanisms arose to solve it. Load balancing is built on top of the existing network structure. It provides a cheap and effective way to expand server bandwidth and increase throughput, strengthen the network's data-handling capacity, and improve its flexibility and availability. Its main tasks are: resolving network congestion and providing services independent of location; giving users better quality of access; improving server response times; raising the utilization of servers and other resources; and avoiding single points of failure in critical parts of the network.

Definition. Strictly speaking, load balancing is not "balance" in the traditional sense of the word; in general it only shares load that might otherwise congest one place across several places. "Load sharing" might be the clearer name. Put plainly, load balancing on a network works like a duty rota: tasks are assigned in turn, so no one person is worked to death. This kind of balancing is generally static, i.e. it follows a predetermined "rotation" policy. In contrast to a fixed rota, dynamic load balancing uses tools to analyze packets in real time, track network traffic conditions, and distribute tasks accordingly. Structurally, load balancing divides into local load balancing and geographic (global) load balancing: the former balances load across a local server cluster, while the latter balances load between server clusters placed in different geographic locations and on different networks.
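The static "duty rota" just described can be sketched in a few lines of shell. This is a minimal illustration of the rotation policy only; the backend host names are made up.

```shell
# Minimal static round-robin sketch: each call to next_backend picks the
# next server in a fixed rotation and stores it in $CHOSEN.
# Backend names are hypothetical.
BACKENDS="web1 web2 web3"
RR_INDEX=0

next_backend() {
    set -- $BACKENDS                 # positional parameters = backend list
    n=$#
    i=$(( RR_INDEX % n + 1 ))        # 1-based index into the list
    RR_INDEX=$(( RR_INDEX + 1 ))
    eval "CHOSEN=\$$i"               # pick the i-th backend
}

next_backend; echo "$CHOSEN"   # web1
next_backend; echo "$CHOSEN"   # web2
next_backend; echo "$CHOSEN"   # web3
next_backend; echo "$CHOSEN"   # web1 -- the rota wraps around
```

A dynamic balancer would replace the fixed rotation with a choice based on measured load, which is exactly the difference the article draws between static and dynamic balancing.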
A server cluster runs, on each node, an independent copy of each server program it needs, such as a web, FTP, Telnet or e-mail server. For some services, such as those running on a web server, a copy of the program runs on every host in the cluster, and network load balancing spreads the workload among those hosts. For other services, such as e-mail, only a single host handles the workload; for these, network load balancing directs all traffic to one host and, if that host fails, moves the traffic to another host. Implemented on top of the existing network structure, load balancing provides the cheap, effective means of expanding bandwidth, throughput, processing capacity, flexibility and availability summarized above. Broadly, load balancing can be implemented with a dedicated gateway or load balancer, or with special software and protocols. In applying it to a network, start from a concrete analysis of where the network's bottlenecks lie and at which layer. Analyzing along the path from the client application downward, against OSI's layered model, load-balancing technology divides into client-based load balancing, application-server techniques, high-level protocol content switching, network-access protocol switching, and other approaches.
The load-balancing hierarchy. ◆ Client-based load balancing. In this mode, the network client runs a special program that periodically or occasionally collects the server farm's running parameters (CPU usage, disk I/O, memory and other dynamic information) and then, under some selection strategy, finds the best available server and directs the local application's requests to it. If the collection program finds that a server has failed, it finds another server to take over. The whole process is completely transparent to the application, and all of the work happens at run time, so this is a dynamic load-balancing technique. But it has portability problems: the special collection program must be installed on every client, and, to remain transparent at the application layer, every application must be modified, via dynamic linking or embedding, so that its requests pass through the collection program before being redirected to a server. Developing such code for nearly every application is a lot of work, so this technique is used only in special applications, for example where certain specific tasks need distributed computing power and there are not many demands on the application developer. Separately, distributed frameworks in the Java world often use this model of load balancing: because the applications run on Java virtual machines, an intermediate layer handling the load-balancing work can be designed between the application layer and the virtual machine. ◆ Application-server load balancing. If the client-side balancing layer is moved to an intermediate platform, forming a three-tier architecture, the client application needs no special modification: the middle-tier application server transparently balances requests across the appropriate server nodes.
The common implementation is reverse-proxy technology. A reverse proxy server can forward requests to multiple servers, or return cached data directly to the client; this acceleration can improve access speed for static web pages to some extent, while also achieving load balancing. The benefit of a reverse proxy is that it combines load balancing with the proxy server's caching, which gives useful performance. It also has problems. First, a specialized reverse proxy must be developed for every kind of service, which is no easy task. Second, although the reverse proxy server itself can reach high efficiency, it must maintain two connections per proxied request, one external and one internal, so under a very high rate of connection requests the proxy server's own load becomes very high. A reverse proxy can run load-balancing policies optimized for the application protocol, for example always sending each request to the most idle internal server; but as concurrency grows, the proxy's own load grows with it, and eventually the reverse proxy server itself becomes the bottleneck. ◆ DNS-based load balancing. NCSA's scalable web farm was among the first systems to use dynamic DNS round-robin. By configuring multiple addresses for the same name in DNS, a client querying the name receives one of the addresses, so different clients reach different servers and load is balanced. Many well-known web sites have used this technique, including the early Yahoo site, 163, and others. Dynamic DNS round-robin is simple to set up, needing no complex configuration or management, and since UNIX-like systems generally support BIND 8.2 and above, it is widely usable. DNS load balancing is a simple and effective method, but it has many problems.
First, the DNS server cannot know whether a service node is working: if a node fails, the DNS system will keep resolving the name to that node, and those users' accesses will fail. Second, the refresh of DNS data is governed by the TTL (Time To Live) flag: only once the TTL expires will other DNS servers go back to the authoritative server, fetch the address data again, and possibly obtain a different IP address. So, to keep address assignment reasonably random, the TTL should be made as short as possible, so that DNS servers in different places refresh their addresses and hand them out at random; but setting the TTL too short inflates DNS traffic and causes extra network problems. Finally, DNS cannot distinguish between servers or reflect their current running state. With DNS load balancing you can only hope that different client machines land on different addresses. For example, user A may merely browse a few pages while user B does heavy downloading; because the domain name system has no suitable load policy, only simple rotation, it is easy for user A's request to be rotated to a heavily loaded server while user B's goes to a lightly loaded one. The dynamic behaviour of DNS round-robin is therefore unsatisfactory. ◆ High-level protocol content switching. Besides the approaches above, there are techniques that build load-balancing capability into the protocol handling itself: URL switching, or layer-7 switching, provides high-level control over access flows. Web content switches examine the full HTTP header and make the load-balancing decision according to the information in it.
For example, this information can determine how to serve personal home pages, image data and other content, using common HTTP capabilities such as protocol redirection. HTTP runs on top of a TCP connection: the client connects directly to the server on the well-known TCP port 80, then sends its HTTP request over that connection. Content switching controls load according to policy applied to the content, not according to the TCP port number, so it does not leave access flows stranded. Since the load-balancing device must distribute requests among multiple servers, it can only establish the TCP connection first and then decide how to balance once the HTTP request has come through. When a web site's hits per second reach hundreds or even thousands, the delays of TCP connection handling and HTTP header analysis become very significant, and every part of the processing must be made as fast as possible. The HTTP request and its header carry a lot of information useful for load balancing. From them we can learn the URL and page the client requested; using this information, a load-balancing device can direct all image requests to an image server, or direct requests whose URLs indicate CGI programs doing database queries to a dedicated high-performance database server. A network administrator familiar with content switching can also use the cookie field of the HTTP header to improve service to specific customers, and if he can find patterns in the HTTP requests, he can exploit them in all sorts of decisions. Besides the TCP connection-table problem, how to find the needed HTTP header information quickly and carry out the load-balancing decision is the key issue affecting the performance of web content switching.
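The URL-based decision just described can be sketched as a simple routing rule; the backend names and path prefixes below are made up for illustration, not taken from any real switch product.

```shell
# Sketch of layer-7 (URL) switching: map a requested path to a backend pool.
# Backend names and path prefixes are illustrative assumptions.
route_request() {
    case "$1" in
        /images/*)  BACKEND=image-server ;;   # all image requests to the image server
        /cgi-bin/*) BACKEND=db-server ;;      # CGI/database queries to the database server
        *)          BACKEND=web-server ;;     # everything else to the general pool
    esac
}

route_request /images/logo.png;  echo "$BACKEND"   # image-server
route_request /cgi-bin/search;   echo "$BACKEND"   # db-server
route_request /index.html;       echo "$BACKEND"   # web-server
```

A real content switch makes exactly this kind of decision, but only after terminating the TCP connection and parsing the HTTP header, which is why header-parsing speed dominates its performance.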
If the web servers have been specially optimized for particular functions, such as SSL sessions for image service or database transaction services, then this level of flow control improves network performance. ◆ Network-access protocol switching. A large network is usually composed of a great deal of specialized equipment, including firewalls, routers, layer-3 and layer-4 switches, load balancers, cache servers and web servers. How to combine these devices organically is a critical issue that directly affects network performance. Many switches now provide layer-4 switching: they present a single consistent IP address mapped to multiple internal IP addresses and, for each TCP or UDP connection request, dynamically select an internal address according to its port number and the configured policy, forwarding the packets to that address so the load is shared equally. Many hardware vendors integrate this technology into their switches.

Building the Linux version of the Google Chrome browser
Chromium build instructions (Linux). This page describes how to build the Chromium browser on a Linux system.
If you want to test Chromium or help port it to other platforms, read on. Small tip: there is no Chromium browser that runs on Linux yet; some submodules compile on Linux and a small part of the unit tests pass, but "all tests pass" here only means one command reports success. Prerequisites. Note: the idea is that you should be able to build Chromium on any reasonably modern Linux distribution, and we try our best to list the build prerequisites. Of course, the Linux port is only at its beginning, our testing across distributions is limited, and our development platform is a variant of Ubuntu 8.04 (Hardy Heron); on that system you should have the best luck. Building on Linux requires the following software: Subversion >= 1.4 (tip: if you use the tarball it is hard to track code changes; you want version 1.5, which we will address later) (translator's note: Subversion is a version-control system more advanced than CVS); pkg-config >= 0.20 (translator's note: a development-library configuration tool); Python >= 2.4 (translator's note: the Python language environment and tools); Perl >= 5.x; gcc/g++ >= 4.2; bison >= 2.3 (translator's note: the GNU bison parser generator); flex >= 2.5.34; gperf >= 3.0.3; libnss3-dev >= 3.12. On Ubuntu 8.04 you can install all of this software with one command:

$ sudo apt-get install subversion pkg-config python perl g++ bison flex gperf libnss3-dev

Getting the code. 1. Choose a build directory; this document refers to it as the variable $CHROMIUM_ROOT. 2.
Get the depot_tools code library:

$ cd $CHROMIUM_ROOT
$ svn co http://src.chromium.org/svn/trunk/depot_tools/linux depot_tools

(or download the compressed package: depot_tools_linux.tar.gz). To keep these build instructions self-contained, we assume your depot_tools directory sits inside your build directory ($CHROMIUM_ROOT), but that is not required: you can put it anywhere and add it to your PATH or another path variable. 3. Because so many people are interested in this work, our temporary server is sometimes unreachable; please try downloading a code snapshot from SVN, unpacking it, and following the upgrade instructions; you will get the same result as a gclient sync:

$ cd $CHROMIUM_ROOT
$ export LANG=C   # temp workaround for gclient behavior
$ ./depot_tools/gclient config http://src.chromium.org/svn/trunk/src
$ ./depot_tools/gclient sync

Tip: by default, when you run gclient sync, depot_tools automatically updates itself to the latest version; if you want to turn off this behavior, see the depot_tools documentation page. Building. To build the current Chromium Linux subset:

$ cd $CHROMIUM_ROOT/src/chrome
$ ../third_party/scons/scons.py Hammer

After the build, the executables are placed in the $CHROMIUM_ROOT/src/chrome/Hammer directory. Troubleshooting: "sh: d: not found while processing Hammer/webkit/WebCore/xml/XPathGrammar.y" means you have not installed bison; we are fixing our build scripts to make this friendlier, but by the time you read this the fix may not have landed yet!

Customize your Linux application environment (1)
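Before starting, a small shell loop can check whether the listed prerequisites are on the PATH. This is a convenience sketch only; it mirrors the tool list above but checks presence, not the minimum versions.

```shell
# Sketch: report which build prerequisites are missing from the PATH.
# Checks presence only, not the minimum versions listed above.
check_tools() {
    MISSING=""
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            MISSING="$MISSING $tool"
        fi
    done
}

check_tools svn pkg-config python perl g++ bison flex gperf
if [ -n "$MISSING" ]; then
    echo "missing:$MISSING"
else
    echo "all prerequisites found"
fi
```

Anything reported missing can then be installed with the apt-get command shown earlier.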
Author: Cao Jianghua. Open-source Linux gives users exactly this kind of platform: an application environment you can customize to your own software and hardware.
Thus, by customizing the application environment to each user's different range of uses, you can raise a Linux system's performance to new heights.

Customizing Linux system services. At boot time, a Linux system needs to start many system services, which provide local and network users with the system's functional interfaces and face applications and users directly. However, unnecessary or vulnerable services harm both the operating system's security and its performance: a vulnerability in any of them can compromise the entire system. Therefore the best way to increase system security is to keep watch over the system's functions, choosing the number of services and features according to actual needs and workload. Run as root:

# ntsysv

(Figure 1: remove the * in front of unneeded services and processes.) As shown in Figure 1 (Red Flag Linux 3.0 is used as the example), here you can turn each system service on or off. Remove the * in front of the services and processes you do not need (with the space bar), then restart the system; the unneeded services and processes will no longer start. In this way you can customize system services at any time as needs change, guaranteeing safety and also improving performance. For Linux to work properly, some system services must start, for example crond, syslog, keytable, nfs and kudzu. To customize system services efficiently and safely, the following entries describe what each service does. alsasound: ALSA audio driver support; the ALSA sound drivers were originally written for the Gravis UltraSound (GUS) card and are compatible with OSS/Free and OSS/Linux. apmd: monitors the system's power state and writes the related information to the logs through syslogd; it can also shut the system down when power is low. It is generally used on laptops; on desktops it is recommended to turn it off.
atd: schedules tasks with the at command, and also runs batch jobs when system load is relatively low. autofs: mounts file systems on demand and automatically unmounts them when no longer needed. chargen: short for character generator; this port outputs a rotating sequence of printable characters, used to test character terminal equipment. chargen-udp: the UDP form of the chargen port; likewise outputs a rotating sequence of printable characters for testing character terminals. crond: runs scheduled tasks periodically according to user requests; it is fairly secure and easy to configure, similar to Windows scheduled tasks. dhcpd: provides Dynamic Host Control Protocol (DHCP) access support. echo: a port that simply echoes back all data sent to it, used to test connections. echo-udp: the UDP form of the echo port; likewise echoes all data back to test connections. gpm: provides mouse support for text-mode Linux programs such as MC (Midnight Commander); it also supports console copy and paste with the mouse, and pop-up menus. inetd: the Internet services daemon; it monitors the network for service requests and starts the appropriate service process when needed. Typically inetd manages telnet, ftp, rsh and rlogin; shutting down inetd shuts down the services it manages. httpd: the famous WWW server, which can serve HTML files and CGI dynamic content. isdn: the ISDN daemon. keytable: loads the keyboard mapping table configured in /etc/sysconfig/keyboards; the table can be selected with the kbdconfig tool, and this service should be left active. kudzu: a hardware detection program, like Windows's Add New Hardware; if the kernel supports the hardware and has the driver, it can be set up automatically. linuxconf: a useful Linux system configuration tool that also allows remote operation. linuxconf-web: use linuxconf in web mode.
lpd: the system printing daemon, responsible for the print jobs submitted to it by lpr.
medusa: a Web browser.
mysql: a fast, efficient and reliable small SQL database engine.
ntalk: allows users to transfer messages back and forth between their own computer and others.
netfs: responsible for mounting/unmounting NFS, Samba and NCP (NetWare) file systems.
network: activates/deactivates each network interface at startup.
nfs: a popular file sharing protocol for TCP/IP networks. This service provides NFS file sharing; the specific configuration is in /etc/exports.
nscd: handles password and group lookups and caches the results. If the system uses a slower name service (such as NIS or NIS+), you should start this service.
pxe: a service for booting remote diskless Linux systems.
pcmcia: used primarily to support laptops.
rexec: the remote execution daemon, which runs commands submitted from remote systems.
random: saves and restores the seed of the system's high-quality random number generator. The random numbers are gathered from random system events.
routed: a daemon that maintains the IP routing table automatically via the RIP protocol. RIP is mainly used in small networks; larger networks require more complex protocols.
rsync: a remote file synchronization server that allows periodic consistency checks.
rsh: starts a shell on a remote host and executes user commands there.
rwhod: allows remote users to obtain a list of all users logged on to the machine running the rwho daemon, similar to finger.
swat: the Samba administration toolkit, which uses port 901.
sendmail: the mail server.
smb: starts and stops the smbd and nmbd daemons to provide SMB network services.
snmpd: the Simple Network Management Protocol (SNMP) daemon.
syslog: a mechanism provided by the operating system; daemons usually use it to write all kinds of messages to the system log files. You should normally start this service.
xfs: the X Window System font server.
xinetd: the successor to inetd; it monitors requests for the various network services and starts the appropriate service process when necessary.

Of these system services, the following carry considerable security risk: rsh, rwhod, rexec, snmp, named and sendmail. For the services you really do need, try to use the latest version of the program and take additional security precautions. In addition, many Linux products start an X Window manager by default after system initialization. If you only compile programs or edit configuration files, a running X Window manager consumes a large amount of system resources for nothing. To disable it, edit /etc/inittab, locate the line id:5:initdefault:, and change it to id:3:initdefault:; after a restart the system will present a command-line login. When you need to run the X Window manager, just enter startx.

Optimizing for your hardware. 1. CPU. The CPU is the core hardware of a Linux host; compiling with optimizations for the CPU type gives the best performance. The file /etc/profile contains the system environment and startup configuration. Compiling with -O9 gives the fastest running code. With -fomit-frame-pointer, the program dispenses with the frame pointer when accessing variables at run time, freeing a register. With -mcpu=cpu-type and -march=cpu-type, gcc optimizes for the given CPU model. If the CPU is a Pentium Pro, Pentium II, Pentium III, AMD K6-2, K6-3 or Athlon, add to /etc/profile:

CFLAGS='-O9 -funroll-loops -ffast-math -malign-double -mcpu=pentiumpro -march=pentiumpro -fomit-frame-pointer -fno-exceptions'

If the CPU is a Pentium, Pentium MMX, AMD K5, IDT or Cyrix, add to /etc/profile:

export CFLAGS='-O3 -march=pentium -mcpu=pentium -ffast-math -funroll-loops -fomit-frame-pointer -fforce-mem -fforce-addr -malign-double -fno-exceptions'

2. Hard drive. For hard drives using UDMA/33, 66, 100 and 133 technology, the maximum transfer rates are 33 MB/s, 66 MB/s, 100 MB/s and 133 MB/s respectively.
Theoretically that is 3 to 6 times the transfer rate of an ordinary IDE hard disk (16.6 MB/s in PIO Mode 4). In the default Linux settings, however, DMA is disabled, so it must be switched on. We can use the /sbin/hdparm program to do this. Some common hdparm options are as follows:

/sbin/hdparm -c1 /dev/hda (or hdb, hdc and so on): enables 32-bit I/O over the PCI bus for data transfer.
/sbin/hdparm -d1 /dev/hda: enables DMA mode for data transfer.
/sbin/hdparm -d1 -X66 /dev/hda: enables UltraDMA mode data transfer.

To get a list of the current settings of a hard drive in the system, type (as root): /sbin/hdparm /dev/hda. Then enter /sbin/hdparm -k1 /dev/hda so that the settings above are kept after a drive reset. Once everything has been tuned to its best state, add the commands to the /etc/rc.d/rc.local file so that they run automatically every time the system boots.

3. Memory. Under Linux you can observe memory usage with free. If you find that Linux can use only part of your memory, append mem=XXX to /etc/lilo.conf, where XXX is the physical memory capacity; this tells Linux to use all of it. If high computing speed is required, besides adding memory you can use ramdisk technology. A RamDisk is a portion of memory treated as if it were a hard drive, with files stored in it. If a few files are used very frequently, putting them in memory greatly increases the speed of programs, because memory read/write speeds are far higher than those of a hard disk. Setting aside some memory this way can improve overall performance more than replacing the CPU with a newer one. A Web server, for example, has to read and exchange particular files heavily, so creating a RamDisk on a Web server markedly improves network read speed:

$ mkdir /tmp/ramdisk0
$ mke2fs /dev/ram0
$ mount /dev/ram0 /tmp/ramdisk0

These three commands create a mount point, make a file system on the RAM device and mount it.
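Pulling the boot-time pieces of this article together, the additions amount to a few lines in /etc/rc.d/rc.local and /etc/profile. The following is only a minimal sketch, assuming an IDE disk at /dev/hda and a Red Hat style system with chkconfig; device names, DMA flags and CPU type must be adjusted to your own hardware:

```shell
# Appended to /etc/rc.d/rc.local -- disk tuning, re-applied at every boot (run as root).
/sbin/hdparm -c1 -d1 -X66 -k1 /dev/hda   # 32-bit I/O, DMA on, UltraDMA mode, keep over reset

# Trim services without the ntsysv screen (same effect as removing the *).
chkconfig apmd off    # power management daemon, not needed on a desktop
chkconfig rwhod off   # one of the risky services listed above

# Appended to /etc/profile -- CPU-specific compiler flags (Pentium Pro class example).
CFLAGS='-O9 -funroll-loops -ffast-math -malign-double -mcpu=pentiumpro -march=pentiumpro -fomit-frame-pointer -fno-exceptions'
export CFLAGS
```

Note that -O9 is accepted by gcc but treated the same as its highest real optimization level, and flags such as -fforce-mem belong to the gcc versions of that era.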
ATL Server and ASP.NET

The Web server's task is to accept an incoming HTTP request and return information to the caller (whether that caller is a person or a machine invoking a Web service).
Windows contains a mature structure for processing such requests: IIS and its related extensions. However, programming IIS from scratch is tedious and error prone. A better approach to managing HTTP requests is to use a framework that sits on top of IIS. This month I will compare the two primary technologies for creating Windows Web applications: ASP.NET and ATL Server. Each framework has specific advantages and disadvantages. In this part I will focus on how to manage Web-based UI with ASP.NET; next time I will focus on how to use each framework to build Web services, and on other features.

What is ASP.NET? ASP.NET is a class library designed for handling HTTP requests. In addition to the class library, ASP.NET contains several components that manage requests within IIS. These components include an ISAPI DLL named ASPNET_ISAPI.DLL and a worker process known as ASPNET_WP.EXE. ASP.NET also installs new mappings in IIS that redirect requests for ASPX, ASCX, ASHX and ASMX files to ASPNET_ISAPI.DLL. ASPNET_ISAPI.DLL in turn redirects the request to ASPNET_WP.EXE, which loads the ASP.NET classes required to serve the request. ASP.NET has a very convenient object model centered on a managed class named HttpContext. If you have ever written a standard ISAPI DLL, you will know the EXTENSION_CONTROL_BLOCK structure passed to the DLL's HttpExtensionProc: while managing a request, the ISAPI DLL uses this structure to obtain items such as the connection ID and query string, as well as functions for reading from and writing to the client. ASP.NET packages all of this information in the HttpContext class. ASP.NET also includes the basic class structures for managing Web-based UI (through the System.Web.UI.Page class) and for managing Web services (through the System.Web.Services.WebService class and the [WebMethod] attribute). ASP.NET is object oriented.
Each request passing through an ASP.NET application is handled by a class that implements the IHttpHandler interface. This yields a highly extensible architecture: you can choose to use the ASP.NET page architecture or the Web services architecture, or you can write the processing logic from scratch. Figure 1 shows the path an ASP.NET request takes. The ASP.NET UI-processing architecture is centered on the System.Web.UI.Page class (the ASPX file). ASPX files can include plain HTML code alongside server-side controls. When ASP.NET encounters a server-side control (marked by the runat="server" attribute), it instantiates a class that represents the control (for example, a Button or ListBox control). In essence, ASP.NET treats the page as a tree of server-side controls; the text and markup on the page are packaged as LiteralControl instances, and everything else is a server-side control. When the page is asked to render itself, it simply loops through the control tree, telling each node in the tree to render itself. ATL Server operates somewhat differently.

What is ATL Server? ATL Server is a C++ template library for creating ISAPI DLLs. When IIS first appeared, developers had to write ISAPI extensions from scratch or starting from the MFC ISAPI classes. Generating an ISAPI DLL with raw C++ or MFC required extensive hand-written code; for example, neither raw ISAPI nor MFC gave developers a form-based architecture, and any HTML sent to the client had to be produced by hand. ATL Server combines a form-based architecture with the run-time speed and flexibility of C++. A Web site built with ATL Server consists of three basic components: a server response file (SRF), one or more application DLLs, and the prerequisite ISAPI extension. The SRF is a new file type that ATL Server registers with IIS. IIS maps SRF requests to the application's ISAPI DLL, which in turn dispatches them to one or more application DLLs. The SRF includes a special new markup syntax that essentially names entry points to call in the application DLL.
Figure 2 shows the path a request takes through an ATL Server based system. An ATL Server application consists of several DLLs (the ISAPI extension and the application extensions) and the HTML-generating templates called SRFs (mentioned earlier). This architecture cleanly separates the application's appearance from its logic. The SRF defines the Web page: it contains HTML plus special tags that call into the ATL Server application DLLs. Since the target platform of most such Web applications is Windows, an ATL Server application is built on ISAPI DLLs. An ATL Server project contains a single ISAPI extension DLL that performs the coarse processing of requests, and one or more application DLLs that perform the fine-grained processing of particular requests. The request-handling classes derive from CRequestHandlerT and contain your own code for handling the tags in the SRF. The handler class contains maps that associate the request handler class with the request handler DLL, and that associate overriding methods with SRF tags (the replacement-method dictionaries). Besides the replacement dictionaries, CRequestHandlerT also contains methods and member variables for accessing the standard Web application elements, for example form variables, cookies, the request stream and the response stream. When a browser requests a .srf URL over HTTP, IIS knows to open the ISAPI DLL of the ATL Server application for that site. The ATL Server application then opens the SRF and loads the application DLLs (if they are not already loaded). The application sends the request to the default request handler class, which parses the SRF in search of the special tags. Each time a tag appears in the SRF, the application calls the corresponding replacement method of a handler class living in a particular application DLL. The replacement method dynamically generates the output for the browser.
ATL Server versus ASP.NET. Although ATL Server and ASP.NET are both ISAPI-based architectures, they process requests in very different ways. To illustrate these differences, let's look at a sample application that collects a person's name and his or her development preferences. I will explain how to develop the user interface and how to use session state. In the next part of this column I will examine some other features (such as caching and browser capabilities), as well as how each framework handles Web services. Figure 3 compares some functionality of the two frameworks.

ASPX files and SRFs. Both ASP.NET and ATL Server introduce new file name extensions to the ISAPI architecture. The file types ASP.NET introduces are ASPX, ASMX, ASCX and ASHX, along with some others. Within the ASP.NET framework each file type has a corresponding managed type. For ASPX files this is the System.Web.UI.Page class, which is responsible for rendering the Web page's UI. Figure 4 shows a simple ASPX file. The main things to notice about the ASPX file are the Inherits directive near the top, and the runat="server" attribute on the button, label and drop-down list tags. This attribute indicates a server-side control that has a corresponding class in the Visual Studio code-behind file. By contrast, the SRF consists mostly of standard, common HTML tags. ATL Server has no server-side control architecture. Instead, it introduces the concept of server response tags. These special tags are enclosed in double braces ({{ }}). When the ATL Server request architecture encounters a server response tag, it expects to find a corresponding handler function in an application DLL. Figure 5 shows a simple SRF that displays roughly the same user interface as the ASPX example of Figure 4. The most important thing to notice in this file is the special tags enclosed by double braces.
For example, near the top of the SRF you see the handler tag ({{handler ...}}), which specifies the application's default handler. It tells the ATL Server application which DLL to load in order to find the functions called in response to the tags. The other response tags name entry points in the application DLL.

"Intrinsic" objects. ASP.NET and ATL Server contain similar intrinsic request and response objects, much like classic ASP. In ASP.NET they are represented by the HttpRequest and HttpResponse classes; in ATL Server, by the CHttpRequest and CHttpResponse classes. In each framework their uses are basically the same. The request object encapsulates items such as the request parameters and the URL; the response object carries the text output to the client. For example, to put "Hello World" into the output stream of an ASP.NET based request, just call Response.Write, as follows:

protected void HelloWorld() {
    Response.Write("Hello World");
}

To send "Hello World" to the client from an ATL Server based application, use the CHttpResponse object, as follows:

[tag_name(name="Hello")]
HTTP_CODE OnHello(void) {
    m_HttpResponse << "Hello World";
    return HTTP_SUCCESS;
}

Notice how the server response tag in Figure 5 is used to call the OnHello function (the tag looks like this: {{Hello}}).

Managing UI elements. Each framework uses a different method to manage UI elements. As mentioned earlier, ASP.NET's UI support is built around the server-side control model. A runat="server" attribute in the code (the ASPX file) declares a server-side element on the Web page. As long as the code-behind class declares the corresponding class members, you can easily access the controls programmatically. For example, the code in Figure 4 declares several server-side control elements (the submit button, the text element and the drop-down list). The code-behind page declares Button, TextBox and DropDownList members of the page so that you can use these UI elements from code.
To find the data in the TextBoxName element, you only need to access its Text property, as follows:

string s = this.TextBoxName.Text;

ASP.NET server-side controls also automatically track view state. As the browser posts to and returns from the server, UI elements (such as list boxes and radio buttons) keep a consistent state; for example, the item selected last time in the drop-down list box is the one displayed. There is no need to write any special code to make the controls behave correctly. ATL Server has no such control model: the UI is managed only through server response tags. To populate the drop-down list, an ATL Server sample ...

Troubleshooting Linux operating system panics
(B) Client configuration: (1) Check whether the netdump client is installed with rpm -q netdump. If it is not, find the package in the RedFlag/RPMS/ directory on the CD and install it with: rpm -ivh netdump-x.x.x.rpm (x.x.x is the version number).
(2) Edit the file /etc/sysconfig/netdump and add the following lines:

DEV=eth0
NETDUMPADDR=172.16.81.182
NETDUMPMACADDR=00:0C:29:79:F4:E0

Here 172.16.81.182 is the netdump server's address. (3) Run the following command and enter the password when prompted: service netdump propagate. (4) Enable the client: chkconfig netdump on. (5) Start the client: service netdump start. (6) Test. To verify that the netdump configuration is correct, do the following on the netdump client:

cp /usr/share/doc/netdump-xxxxxx/crash.c .
gcc -DKERNEL -DMODULE -I/lib/modules/$(uname -r)/build/include -c crash.c
insmod ./crash.o

This will crash the system, and you will then see a core dump on the netdump server in the /var/crash/<client-ip>/ directory. While the client is dumping its data to the server you will see a "vmcore-incomplete" file; when the dump is complete, the file is renamed to "vmcore". The size of the vmcore file varies and may reach several GB; on a system with 512 MB of memory, the test above produces a vmcore file of about 510 MB.

How do you determine whether a network card supports netdump? The kernel debugging tool netdump requires a driver that supports the netpoll function. Netpoll is intended to let the kernel send and receive packets even while the network and I/O subsystems are not yet fully available; it is mainly used for the network console (netconsole) and remote kernel debugging (KGDBoE). The core of implementing netpoll in a driver is the poll_controller function, defined as: void (*poll_controller)(struct net_device *dev). This function lets the kernel respond to the controller even when device interrupts are unavailable. Almost all poll_controller functions are defined as follows:

void my_poll_controller(struct net_device *dev) {
    disable_device_interrupt(dev);
    call_interrupt_handler(dev->irq, dev);
    enable_device_interrupt(dev);
}
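There is no single command that reports whether a card supports netdump, but given a kernel source tree you can look for drivers that set the poll_controller hook described above. A rough sketch; the /usr/src/linux path is an assumption about where your kernel source lives:

```shell
# Drivers that define poll_controller can support netpoll,
# and therefore netdump/netconsole. (Assumes kernel source at /usr/src/linux.)
grep -rl 'poll_controller' /usr/src/linux/drivers/net/ 2>/dev/null
```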
Using SELinux and Smack to reinforce lightweight containers

Protecting containers with SELinux. The SELinux policy used to protect the containers consists of one policy module, which has been submitted to the refpolicy (SELinux Reference Policy) development mailing list.
Download this policy to the /root/vs directory as the files vs.if, vs.fc and vs.te, and compile and install it as a new module. Copy the disk images: cp vm.img selinux.img; cp vm.img smack.img. Then create the /vs1 and /vs2 containers with lxc-debian:

mkdir /vs1; cd /vs1
lxc-debian create    (container name: vs1, hostname: vs1, address: 10.0.2.21, gateway: 10.0.2.2, arch: 2 (i386))
mkdir /vs2; cd /vs2
lxc-debian create    (container name: vs2, hostname: vs2, address: 10.0.2.22, gateway: 10.0.2.2, arch: 2 (i386))

and relabel the file systems:

fixfiles relabel /vs1
fixfiles relabel /vs2

When you start a container (for example with the command lxc-start -n vs1), you are likely to see some SELinux audit messages about denied access. Don't worry: the container will start normally, with networking enabled and the container isolated. If, before starting the container, you use mount --bind / /vs1/rootfs.vs1/mnt to help the container vs1 masquerade, you will find that even the root user cannot list /mnt/root. To understand why, look at the vs.if interface file. It defines an interface called container, which takes one argument (the base name by which the container will be defined). The vs.te file calls this interface twice, with the container names vs1 and vs2. Within the interface, $1 expands to this argument, so when we call container(vs1), $1_t becomes vs1_t (from here on, assume we are defining vs1). The lines containing vs1_exec_t are the most important: the container runs in the vs1_t type. When unconfined_t executes the container's /sbin/init (of type vs1_exec_t), it transitions into this type. The remainder of the policy grants the privileges a full container needs in order to access the various parts of the system: network ports, devices, consoles and so on. The interface is quite long, which reflects the fine-grained nature of the current SELinux reference policy.
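The compile-and-install step that the article mentions but does not show can be done with the standard reference-policy build system. A sketch, assuming the sources vs.te, vs.if and vs.fc sit in /root/vs and that the SELinux policy development files are installed:

```shell
cd /root/vs
# Build vs.pp from vs.te/vs.if/vs.fc with the reference-policy Makefile.
make -f /usr/share/selinux/devel/Makefile vs.pp
# Load the compiled module into the running policy, then confirm it is present.
semodule -i vs.pp
semodule -l | grep '^vs'
```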
As we'll see, protecting a container with Smack involves a much simpler policy; however, it offers far less flexibility in constraining the behavior of system services. One caveat should be noted: although the container cannot overwrite its $1_exec_t file (i.e. /sbin/init), it can execute mv /sbin /sbin.bak; mkdir /sbin; touch /sbin/init, producing a /sbin/init of type vs1_file_t. Why would a container administrator want to do this? Because it would start the container, including its ssh daemon, in the unconfined_t domain, giving him access to a privileged shell and the ability to bypass the SELinux restrictions we are trying to enforce. To avoid this, you need to start the container with a custom script that relabels the container's sbin/init as vs1_exec_t before launching it. In fact, if the container administrator doesn't mind, an original copy of init can be copied back into the container and relabeled; but here we just relabel the existing init: cat > /vs1/vs1.sh ...

Using sudo to reinforce Linux system security
II. Using sudo in detail. Through the sudo command's configuration, we can let a user execute certain commands as root, or let him execute certain commands as another user, which is especially useful for system administration.
The specific configuration of the sudo command is found in the file /etc/sudoers, which specifies the commands that particular users may execute. A prerequisite for using sudo is that users have their own user name and password. If a user tries to run a command through sudo but is not listed in the sudoers file, the system automatically sends the administrator an email stating that a non-authorized user has attempted access. As mentioned earlier, sudo works with a "ticket": when a user authenticates to sudo, he is issued a ticket that is valid, by default, for five minutes. The user can also renew the ticket with sudo -v, which grants another five minutes. The command looks like this:

sudo -v

If an unauthorized user runs the command above, the administrator receives an email reporting the event; at the same time, -v informs the non-authorized user that he is not permitted. If the user stubbornly enters sudo commands again, the system again emails the administrator. Whether or not a logon attempt succeeds, sudo records it, by default via syslog(3); this behavior can be changed in sudo's configuration file. The sudo command options are as follows:

-V (version): print the version number and exit.
-h (help): print a help message and exit.
-l (list): list the commands the current user is allowed and forbidden to run.
-v (validate): renew the ticket for the preconfigured amount of time, five minutes by default. If necessary, the user must enter his password again.
-k (kill): expire the user's ticket; the next sudo command will require the password again.
-K (sure kill): remove the user's ticket entirely; afterwards the user must authenticate with his user name and password again.
-u user (user): run the specified command as the given user name, which can be any user other than root. If you want to specify a uid instead of a user name, use #uid.
Wednesday, March 9, 2011
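To make the /etc/sudoers entries behind all of this concrete, the fragment below writes an example policy to a scratch file for inspection only. The user names and the scratch path are hypothetical, and on a real system sudoers must be edited with visudo, never directly:

```shell
# Example sudoers policy, written to a scratch file for inspection only.
cat > /tmp/sudoers.example <<'EOF'
# alice may run any command as root (she is asked for her own password)
alice   ALL=(ALL) ALL
# bob may restart the print daemon only, with no password prompt
bob     ALL=(root) NOPASSWD: /sbin/service lpd restart
# carol may run commands as the non-root user "www" (via sudo -u www ...)
carol   ALL=(www) ALL
EOF
grep -c '=' /tmp/sudoers.example
```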
Embedded systems and system-level programmable data sheet
3. Examples of embedded system applications. Because of its own characteristics, once an embedded system enters an industry market, it tends to have a long life cycle.
The number of embedded systems in use far exceeds that of general-purpose computers. Embedded systems are widely applied in communications, the Internet, finance, aerospace, aviation, consumer electronics, military equipment, instrumentation and manufacturing control, for example:

Finance and commerce: bank ATMs, self-service terminals, POS and query terminals;
Communications and networking: firewalls, VPN, VoIP, PBX;
Aerospace and aviation: GPS, spacecraft controllers, rocket guidance, radar imaging, manned-spacecraft simulation and test;
Consumer electronics: TVs, VCRs, cameras, security systems, video game consoles, PDAs, ticket machines;
Military equipment: military communications laptops, vehicle gun controllers;
Instrumentation and medicine: signal analyzers, ECG measurement, color B-ultrasound, CT;
Manufacturing control: large oil fields, coal mines, nuclear power plants, steel mills, port control.

4. Platform-based design of embedded systems: the embedded intelligent platform [1]. 4.1 How the embedded intelligent platform arose. In response to intense competition in embedded systems, and to accelerate products' time to market and shorten development cycles, some embedded system manufacturers and semiconductor chip manufacturers have introduced a variety of embedded intelligent platforms (EIP) for embedded applications and industry-specific needs. 4.2 EIP technology. An EIP (Embedded Intelligent Platform) is a development of computer and information technology. A general-purpose computer has a standard form and is adapted to different applications by configuring different application software; its typical product is the PC. An embedded computer usually means some sort of industry-standard single-board computer that can adapt to a variety of working environments and can be embedded into other products.
The embedded computer is therefore characterized by many varieties in small batches, and the industry has a high entry threshold. Since it no longer takes the computer's standard form, some mainstream industrial-control computer manufacturers prefer to define it as an embedded intelligent platform (Embedded Intelligent Platform), hereinafter EIP. 4.3 The composition of an EIP. An EIP generally contains a processor, cache, FLASH, a DRAM controller, a DMA (Direct Memory Access) controller, a PCI (Peripheral Component Interconnect) controller and so on, together with a development system. Using the programming and development tools the development system provides, users add specialized functionality on the basis of the EIP, optimize and evaluate the system for application-specific requirements, and develop the embedded computer systems that go inside products, installations or large systems. The introduction of the embedded intelligent platform reduces the general difficulty and cost of embedded system development and shortens product development cycles. The processor is the core of the EIP; common configurations use a single processor or dual processors. For EIPs aimed at network applications based on packet technology (such as VoIP, Voice over Internet Protocol, and voice over digital subscriber lines), a unitary processor structure finds it very difficult to meet the demands of complex network processing tasks, while a mixed structure combining a RISC (reduced instruction set) processor with a digital signal processor (DSP) for signal detection and noise elimination can meet these requirements.
A typical VoIP system uses a RISC processor for network protocol processing: bit- and byte-level operations to decode headers and routing information, ordering and packaging data, and marking packets so that sound playback is carried out correctly. At the same time, the system needs a DSP for echo cancellation, noise suppression and detection of the speaking party's tones. Before the information is packaged, the DSP performs silence detection and voice compression, and the reverse operations for decoding and decompression. The traditional scheme for meeting these diverse requirements is a dual processor: the RISC processor connects to the digital network, serves as the protocol processing engine and network manager, and also handles the main system control and user interface, while the DSP interfaces with the pulse-code modulation side and performs the speech compression and decompression functions. Although this two-processor scheme is entirely feasible in principle, its most obvious disadvantage is that two processors increase costs: each processor has its own memory and peripheral devices, requires its own design tool chain, and entails nearly independent software development work. In addition, to exchange data between the two processors you must either buffer it in shared dual-port RAM (random-access memory) or FIFO (first in, first out) memory, or control the exchange with a complex software handshake protocol. Moreover, for some small applications the scheme is overkill: the network data rate determines the minimum clock speed the RISC processor must sustain, but in single-channel data processing the RISC processor is in fact idle most of the time.
In a large system a RISC processor can handle multiple channels of data flow, each channel having its own DSP; in a small system the redundant RISC performance is wasted, and the data processing it requires cannot make full use of the DSP's performance either. So for some small applications there are single-processor schemes: extending a RISC processor with DSP functionality; extending a DSP with RISC functionality; or customizing for the application (using ASIC technology), though this approach is time-consuming, technically demanding and expensive. To give the best performance-to-price ratio, there are now extensible processors. The main advantages of an extensible processor are a unified memory and a single software development environment, with lower costs than a traditional dual-processor design. In addition, because the customization is implemented internally, allowing the user to add application-specific instructions, this kind of processor can also protect intellectual property. However, before choosing an extensible processor, design engineers must fully understand which instruction groups to add in order to take full advantage of this structure. Processors of this kind currently include Tensilica's Xtensa and ARC International's ARCtangent. 4.4 Development and characteristics of the EIP industry. The embedded intelligent platform industry is full of opportunities and challenges and has developed rapidly in recent years. According to the leading United States embedded computer manufacturer WinSystem, writing in RTC magazine, over the next 10 years the embedded intelligent platform market will offer 10 times the opportunity of the desktop computer market. IDC forecasts that the annual growth rate of embedded intelligent platforms will reach 15%; in 1998 the market size was 126.5 billion, in 2000 it was 250 billion, and in 2001, 311 million.
From this you can tell that wherever there is a need for embedded intelligent processing, there is a market for embedded intelligent platforms. Technology is continuously updated: EIP sits at the top of today's global information industry and is how the computer enters everyday life in the twenty-first century. At present there is no more advanced technology or product to replace it; the industry's technology updates quickly, keeping pace with computer technology, VLSI technology and software technology, its application areas keep extending, and new EIP products are continuously developed and brought rapidly to market. Industry barriers are high: developing EIP products requires not only mastery of key technologies in computing, communications and software, together with rich experience in product development and production management, but also a profound understanding of the product's target industry. These factors mean that competitors in the EIP market need the gradual accumulation of long-term development and production management practice and rich industrial application experience to form a competitive advantage; the industry therefore has high barriers to entry. Key components still depend on imports for China's EIP manufacturers: at present some key components in China's EIP manufacturing, such as microprocessors and VLSI, still depend on supply from manufacturers in the United States, Japan and elsewhere. Supply, demand and price fluctuations in this parts market directly affect the EIP industry. With China joining the WTO, duty-free IC imports will help reduce product costs. China's EIP industry: the embedded intelligent platform is an information industry currently receiving focused national support.
In recent years the State Council and relevant ministries have enacted a number of policy documents important for promoting the EIP industry, including the Information Industry "Tenth Five-Year Plan" Outline, the Catalog of Industries, Products and Technologies Currently Encouraged by the State, the Policies for Encouraging the Development of the Software and IC Industries, the "Tenth Five-Year" Special Program for Computer and Network Products, and the Instrument Industry "Tenth Five-Year" Plan. At the same time, the State will use high and advanced technology to transform traditional industries, including applying electronic information technology to them to raise the level of production-process automation, intelligent control and management informatization; apply advanced manufacturing technology to promote high-quality, high-yield manufacturing and revitalize the equipment manufacturing industry; and upgrade the key technologies, common technologies and related technology and equipment of key industries. All of these promote the development of the EIP industry and its market.