Course: Library Automation, Information Storage & Retrieval-I (5643)
Level: MLIS Semester: Autumn, 2021
ASSIGNMENT No. 2
Q.1 Define software. How is system software different from application software? Discuss uses of the MS-PowerPoint software in libraries.
Ans
Software is a collection of instructions that tell a computer how to work.[1][2] This is in contrast to hardware, from which the system is built and actually performs the work.
At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example displaying some text on a computer screen; causing state changes which should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to “jump” to a different instruction, or is interrupted by the operating system. As of 2015, most personal computers, smartphone devices and servers have processors with multiple execution units or multiple processors performing computation together, and computing has become a much more concurrent activity than in the past.
The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages.[3] High-level languages are translated into machine language using a compiler or an interpreter or a combination of the two. Software may also be written in a low-level assembly language, which has a strong correspondence to the computer’s machine language instructions and is translated into machine language using an assembler.
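This translation step can be made visible in Python, whose interpreter compiles high-level source into lower-level bytecode before executing it. Bytecode is not machine code, but the principle of translating human-readable statements into primitive instructions is the same (a sketch; the `add` function is only an illustration):

```python
import dis

# A high-level, human-readable function...
def add(a, b):
    return a + b

# ...is translated by the interpreter into low-level bytecode
# instructions, much as a compiler emits machine instructions.
opnames = [instr.opname for instr in dis.Bytecode(add)]
print(opnames)
```

The exact instruction names vary between Python versions, which is itself a reminder that low-level instruction sets are tied to a particular machine (here, a virtual one).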
Following are some basic differences between system software and application software. System software is the type of software that serves as the interface between application software and the hardware, whereas application software is the type of software that runs as per the user’s request.
System software is software designed to provide a platform for other software. Examples of system software include operating systems (OS) such as macOS, Linux, Android and Microsoft Windows, along with device drivers, firmware, and the low-level utilities and runtime services that other software depends on.[1]
Application software is software that allows users to do user-oriented tasks such as create text documents, play or develop games, create presentations, listen to music, draw pictures or browse the web.[2]
In the late 1940s, the early days of computing, most application software was custom-written by computer users to fit their specific hardware and requirements. System software was usually supplied by the manufacturer of the computer hardware and was intended to be used by most or all users of that system.
Many operating systems come pre-packaged with basic application software. Such software is not considered system software when it can be uninstalled without affecting the functioning of other software. Examples of such software are games and simple editing tools supplied with Microsoft Windows, or software development toolchains supplied with many Linux distributions.
Some gray areas between system and application software are web browsers integrated deeply into the operating system such as Internet Explorer in some versions of Microsoft Windows, or Chrome OS and Firefox OS where the browser functions as the only user interface and the only way to run programs (and other web browsers can not be installed in their place).
Cloud-based software is another example of systems software, providing services to a software client (usually a web browser or a JavaScript application running in the web browser), not to the user directly. It is developed using system programming methodologies and systems programming languages.
An application program (application or app for short) is a computer program designed to carry out a specific task other than one relating to the operation of the computer itself,[1] typically to be used by end-users. Word processors, media players, and accounting software are examples. The collective noun “application software” refers to all applications collectively.[2] The other principal classifications of software are system software, relating to the operation of the computer, and utility software (“utilities”).
Applications may be bundled with the computer and its system software or published separately, and may be developed as proprietary or open-source projects.[3] The term “app” often refers to applications for mobile devices such as phones.
Microsoft PowerPoint is a presentation program,[6] created by Robert Gaskins and Dennis Austin[6] at a software company named Forethought, Inc.[6] It was released on April 20, 1987,[7] initially for Macintosh computers only.[6] Microsoft acquired PowerPoint for about $14 million three months after it appeared.[8] This was Microsoft’s first significant acquisition,[9] and Microsoft set up a new business unit for PowerPoint in Silicon Valley where Forethought had been located.[9]
PowerPoint became a component of the Microsoft Office suite, first offered in 1989 for Macintosh[10] and in 1990 for Windows,[11] which bundled several Microsoft apps. Beginning with PowerPoint 4.0 (1994), PowerPoint was integrated into Microsoft Office development, and adopted shared common components and a converged user interface.[12]
PowerPoint’s market share was very small at first, prior to the introduction of a version for Microsoft Windows, but grew rapidly with the growth of Windows and of Office.[13] Since the late 1990s, PowerPoint’s worldwide market share of presentation software has been estimated at 95 percent.
Q.2 Discuss functions of input and output devices of a personal computer.
Ans
The functioning of a computer system is based on the combined use of input and output devices. Using an input device we give instructions to the computer to perform an action, and the computer returns the result of that action through an output device.
Let us first discuss the exact definition of an input and output device:
Input Device Definition: A piece of equipment/hardware which helps us enter data into a computer is called an input device. For example keyboard, mouse, etc.
Output Device Definition: A piece of equipment/hardware which gives out the result of the entered input, once it is processed (i.e. converts data from machine language to a human-understandable language), is called an output device. For example printer, monitor, etc.
List of Input Devices
Given below is the list of the most common input devices along with brief information about each of them.
- Keyboard
- A simple device comprising keys, where each key denotes a letter, a number or a command that can be given to the computer for various actions to be performed
- Its keys are a modified version of typewriter keys
- The keyboard is an essential input device; both desktop computers and laptops use keyboards to give commands to the computer
- Mouse
- It is also known as a pointing device
- Using a mouse, we can directly click on the various icons present on the system and open files and programs
- A traditional mouse has buttons on top for clicking and selecting, and a trackball at the bottom that tracks its movement (most modern mice use an optical sensor instead)
- On laptops, a touchpad replaces the mouse and controls the movement of the pointer
- Joy Stick
- It is a device which comprises a stick which is attached at an angle to the base so that it can be moved and controlled
- Mostly used to control the movement in video games
- Apart from computer systems, joysticks are also used in aeroplane cockpits, wheelchairs, cranes, trucks, etc. to control them
- Light Pen
- It is a wand-like device which can be moved directly over the device’s screen
- It is light-sensitive
- It is used in conjunction with a computer’s cathode-ray tube (CRT) display
- Microphone
- Using a microphone, sound can be stored in a device in its digital form
- It converts sound into an electrical signal
- To record or reproduce sound captured by a microphone, it needs to be connected to an amplifier
- Scanner
- This device can scan images or text and convert them into a digital signal
- When we place a document on a scanner, it converts the document into a digital image and displays it on the computer screen
- Barcode Reader
- It is a kind of optical scanner
- It can read bar codes
- A light source is passed over the bar code, and the information encoded in it is decoded and displayed on the screen
All the devices mentioned above are the most commonly used input devices. Several other such types of equipment are used in different fields which can be counted as an input device.
List of Output Devices
The commonly used output devices have been listed below with a brief summary of what their function is and how they can be used.
- Monitor
- The device which displays all the icons, text, images, etc. over a screen is called the Monitor
- When we ask the computer to perform an action, the result of that action is displayed on the monitor
- Various types of monitors have also been developed over the years
- Printer
- A device which makes a copy of pictorial or textual content, usually on paper, is called a printer
- For example, an author types an entire book on his/her computer and later gets a printout of it on paper, which is then published
- Multiple types of printers are also available in the market, which can serve different purposes
- Speakers
- A device through which we can listen to a sound as an outcome of what we command a computer to do is called a speaker
- Speakers may be built into the computer system or attached as separate hardware devices
- With advances in technology, wireless speakers are now available that can be connected using Bluetooth or other technologies
- Projector
- An optical device which presents an image or moving images onto a projection screen is called a projector
- Most commonly, projectors are used in auditoriums and movie theatres to display videos or presentations
- If a projector is connected to a computer, then the image/video displayed on the screen is the same as the one displayed on the computer screen
- Headphones
- They perform the same function as speakers; the difference lies in coverage rather than in the sound itself
- Using speakers, the sound can be heard over a larger area, while with headphones the sound is audible only to the person wearing them
- Also known as earphones or a headset
Q.3 Explain the term database. Discuss two online databases in detail.
Ans
In computing, a database is an organized collection of data stored and accessed electronically from a computer system. Where databases are more complex they are often developed using formal design and modeling techniques.
The database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS software additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a “database system”. Often the term “database” is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
Computer scientists may classify database-management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, referred to as NoSQL because they use different query languages.
Formally, a “database” refers to a set of related data and the way it is organized. Access to this data is usually provided by a “database management system” (DBMS) consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term “database” is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index) as size and usage requirements typically necessitate use of a database management system.[1]
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups:
- Data definition – Creation, modification and removal of definitions that define the organization of the data.
- Update – Insertion, modification, and deletion of the actual data.[2]
- Retrieval – Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form essentially the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.[3]
- Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by an event such as an unexpected system failure.[4]
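These four functional groups can be illustrated with SQLite, a small embedded DBMS that ships with Python (a minimal sketch; the `books` table and its columns are invented for illustration, and administration is largely handled by the DBMS itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway in-memory database
cur = conn.cursor()

# Data definition: create the structure that organizes the data
cur.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")

# Update: insert and modify the actual data
cur.execute("INSERT INTO books (title, year) VALUES (?, ?)", ("SQL Basics", 2019))
cur.execute("UPDATE books SET year = 2020 WHERE title = ?", ("SQL Basics",))

# Retrieval: get the information back in a directly usable form
cur.execute("SELECT title, year FROM books")
rows = cur.fetchall()
print(rows)  # [('SQL Basics', 2020)]

conn.close()
```

The same four groups exist in every DBMS, whatever the query language; SQL is simply the dominant notation for relational systems.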
Both a database and its DBMS conform to the principles of a particular database model.[5] “Database system” refers collectively to the database model, database management system, and database.
Online database
An online database is a database accessible from a local network or the Internet, as opposed to one stored locally on an individual computer or its attached storage (such as a CD). Online databases are hosted on websites and made available as software-as-a-service products accessible via a web browser. They may be free or require payment, such as a monthly subscription. Some have enhanced features such as collaborative editing and email notification.
Q.4 Explain the selection and evaluation criteria of online databases.
Ans
1. All databases are different
The first isn’t a criterion. It is a premise you should understand throughout this process: each database is as different as cars produced by different vendors. Cars range all the way from a Zaporozhets to a Ferrari, a Maybach and similar monsters. The same is true of databases. You can’t carry a big truck load in a Ferrari, and if you shift an automatic gearbox the way you’d shift a manual one, you’ll break it. Likewise with databases: they are produced by different vendors, they have different architectures, and you cannot apply to DBMS X the same best practices you used for DBMS Y.
2. Understand your current requirements, but also look to the future
This is the main criterion. You have to understand your functional and nonfunctional requirements and choose the database best suited to them. The main requirements should include at least the following:
1) Data amount and type of data (text, binary, spatial or other specific data types);
2) Number of simultaneous users (simultaneous requests to database);
3) Availability, i.e. how much outage you can allow for your db;
4) Scalability – what you’ll do when the data volume and user count in your database grow;
5) Security – how much you need features like secure data, data encryption, user management, and data access according to different privileges;
6) Manageability and administration – how easily and conveniently you’d like to manage the database;
7) What other features you’d like to get out of your db.
It is reasonable to define your requirements according to your resources and real necessities. Of course, it is advisable to look a bit into the future and adjust the requirements accordingly, but that should be done with care.
Some examples. Always consider whether you really need 99.99% availability (that actually means less than an hour of downtime in a year!) if your system is used at night by only a couple of users who can easily wait until morning. Security is a very important thing, but remember that there is no real cure for human error. High availability is of course a very complex affair: the fact that your database is running doesn’t mean customers can reach it in case of a network failure. You also have to remember things like backup air conditioners; otherwise, on a hot summer day with a broken air conditioner, no amount of database mirroring, clustering, standby databases, diesel generators or God knows what else will help 🙂
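The availability percentages translate into concrete downtime budgets with simple arithmetic (a quick sketch; the function name is mine):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_percent):
    """Maximum downtime per year allowed by a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for level in (99.0, 99.9, 99.99):
    print(f"{level}% availability allows {downtime_minutes(level):.1f} min/year down")
# 99.99% allows roughly 52.6 minutes, i.e. less than an hour per year
```

Running the numbers before signing an availability target makes it obvious how much each extra "nine" actually costs.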
3. Take into account your previous experience
Take into account your previous experience and skills working with databases. As I’ve said, they are different. If you choose an unknown or semi-unknown database, you’ll need time to get acquainted with it: to learn its tools, to understand its features and strengths, and to understand its weaknesses, which usually are not pointed out on the front page 😉 You’ll probably have to send your database administrators and developers to extra courses, costing extra money and time away from work.
4. Existing situation – must it cooperate with existing systems, or is it your first DBMS?
There is a big difference between a project that is the first in its area and one that must collaborate with many other systems. In the first case you have much greater freedom of choice. In the second case you have to deal with the fact that databases from one vendor usually talk better with the same vendor’s databases than with rival DBMSes. If the existing databases differ from the DB chosen for your new project, you and/or your customer will have to deal with multiple support channels, introducing greater complexity not only at the technical level but also at the administrative level.
5. Evaluate existing labor market
The situation with DB developers and administrators differs from place to place. Make sure that you will be able to find a knowledgeable person in case of a db failure. Whether that person is your employee or a specialist from a consultancy, you should have at least one, and preferably more than one, person you trust. It will be very unpleasant to pay big money per day to somebody restoring your db, but it will be much more unpleasant to realize that there is no one who can do it.
The same story applies to the usual working process. Administrators and developers of different DBMSes command different salaries. This varies from time to time and from place to place, but there definitely are differences. The available labor market for the chosen DB will directly influence salaries, and it will directly influence whether you can get a new developer if the previous one performs worse than you expected.
6. Evaluate direct expenses
6.1. Licenses
This is the most common thing everybody talks about in the db selection process. One can find numerous articles claiming that DB X is cheaper than DB Y by comparing pure license costs alone. Actually, the license cost is only one of the factors that directly influence cost, not to mention the indirect cost components. Speaking of license costs, one has to remember:
- DBMSes usually have different licensing models, for example based on the number of users, CPU count, or RAM amount.
- The official license cost is usually only the starting point. You can probably get discounts, especially if you are a big customer.
- The biggest DBMSes usually come in different editions, such as enterprise and standard, with important differences in license costs. Do you really always need the enterprise edition, even though it is usually the first bet? Again, remember the first point – databases are different – so never mechanically compare prices of the same edition across different DBMSes. Look inside at what each edition actually contains: what is enterprise for database X might be merely standard for database Y.
- Explore DB licensing schemes, discounts, and additional features and options. Don’t take the final cost quoted by the DB sales person for granted; ask him to explain its breakdown.
6.2. Support
Understand how much support costs and what it offers. Usually there are different support levels with different costs and features.
6.3. Additional features
Some databases offer additional options on top of the enterprise edition – for an additional price, of course. In that case you’ll pay both for your enterprise/standard edition and for the additional features. This is where you should carefully analyze whether you really need the extra option, and where you can ask the sales rep for a bigger discount 😉
6.4. Operating system requirements
Some databases require specific operating systems. Operating systems, like databases, are different and require different administration and maintenance skills. When evaluating operating system costs, one can use an approach similar to the one defined in this article for databases.
7. Database failure
What is the probability that your chosen database will fail to respond? Who will recover it? What is the expected time to recover it? How much money would you or your customer lose during the failure and recovery?
8. Specific industry
Some industries (for example, geographic information systems) require special databases, or at least databases with support for that specific industry. Of course, if you’d like, you can be a pioneer and start adapting your chosen database to the industry yourself, but that might be a really resource- and time-consuming task.
9. Specific product
If you have chosen a specific product that works with several different databases, first make sure this flexibility doesn’t bring only negative consequences for such applications. If you are sure about your decision, then clarify which database is the preferred one. If all databases are claimed to be equally good, you can be sure they are equally bad as well 😉 I definitely don’t wish you to be the first user of a new database for your chosen product 😉
10. Existing user community, available online resources and popularity
Unless you were born a pioneer, it is worth understanding what user community resources exist – forums, mailing lists, informal user groups, conferences and seminars. Also identify the articles, whitepapers, manuals and other online resources that will make your life easier in case of emergency. Most probably you won’t want to be the first to solve too many problems; it is enough to be the first to solve a few.
11. Necessary hardware
Understand which hardware platforms are supported and what the minimum hardware requirements are. These of course depend on the planned amount of data, the number of simultaneous users and the predicted load on your database.
12. Decision criteria summary
To be fair, I don’t expect you’ll always follow the decision process and criteria explained above 😉 Even more – I’m quite sure that when you are creating a small system for 5 users, you often won’t care about high availability, security, etc. BUT there is one thing I’d like to highlight above all: the cost of a database isn’t the same as the cost of its license! No! The bigger your system grows, the less impact the license cost has on overall expenses.
If you decide to use the TCO (total cost of ownership) method to choose your db, then the criteria mentioned above can serve as some of the inputs for the TCO calculation.
13. Similar links
There is an article by Lewis Cunningham, “A Complete Newbie’s Guide to Choosing a Database”, with similar thoughts on the same topic.
Database list
1. Most popular databases
According to Gartner’s relational database market share figures, there are three main players in the commercial database market: Oracle, DB2 and SQL Server, by Oracle, IBM and Microsoft respectively. Competition is usually a good thing; there is endless rivalry among them over who is the best (usually without a precise definition of what “best” means), and we are all able to use the express editions (with no license cost) of their products.
Among open source products, I’d first mention two: MySQL and PostgreSQL. The motto of the first is “The world’s most popular open source database”, and of the second, “The world’s most advanced open source database”. As you have probably already figured out, they have a rivalry of their own, while of course also trying to take a piece of the pie from the big three. There is a short article explaining some differences between these two as well as some other DBMSes.
Q.5 Write short notes on the following:
- a) Microprocessor
- b) Hard disk
- c) RAM
- d) ROM
- e) USB
Ans
Microprocessor
A microprocessor is a computer processor where the data processing logic and control is included on a single integrated circuit, or a small number of integrated circuits. The microprocessor contains the arithmetic, logic, and control circuitry required to perform the functions of a computer’s central processing unit. The integrated circuit is capable of interpreting and executing program instructions and performing arithmetic operations.[1] The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results (also in binary form) as output. Microprocessors contain both combinational logic and sequential digital logic, and operate on numbers and symbols represented in the binary number system.
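The point that a processor treats all data as binary numbers can be sketched at the software level (a toy illustration in Python, not a model of actual CPU circuitry):

```python
# The character 'A' is stored as the binary number 01000001 (decimal 65)
code = ord("A")
print(bin(code))  # 0b1000001

# Adding 1 to that binary value yields the code for 'B':
# to the processor, text and arithmetic are both just bits
print(chr(code + 1))  # B

# Bitwise operations act directly on individual bits
low_bits = code & 0b1111  # keep only the low four bits
print(low_bits)  # 1
```

Whether the bits represent a character, a number or an instruction is purely a matter of interpretation by the program and the processor.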
The integration of a whole CPU onto a single chip or a few integrated circuits using very-large-scale integration (VLSI) greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated metal-oxide-semiconductor (MOS) fabrication processes, resulting in a relatively low unit price. Single-chip processors increase reliability because there are far fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a semiconductor chip of the same size) generally stays the same, according to Rock’s law.
Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits, typically of TTL type. Microprocessors combined this into one or a few large-scale ICs. The first commercially available microprocessor was the Intel 4004.
Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
Hard disk
A hard disk drive (HDD), hard disk, hard drive, or fixed disk[b] is an electro-mechanical data storage device that stores and retrieves digital data using magnetic storage and one or more rigid rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces.[2] Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data even when powered off.[3][4][5] Modern HDDs are typically in the form of a small rectangular box.
Introduced by IBM in 1956,[6] HDDs were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like cell phones and tablets, rely on flash memory storage devices. More than 224 companies have produced HDDs historically, though after extensive industry consolidation most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced (exabytes per year) for servers. Though production is growing slowly (by exabytes shipped[7]), sales revenues and unit shipments are declining because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, somewhat better reliability,[8][9] and much lower latency and access times.[10][11][12][13]
The revenues for SSDs, most of which use NAND flash memory, slightly exceed those for HDDs.[14] Flash storage products had more than twice the revenue of hard disk drives as of 2017.[15] Though SSDs have four to nine times higher cost per bit,[16][17] they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important.[12][13] Cost per bit for SSDs is falling, and the price premium over HDDs has narrowed.[17]
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion (109) bytes). Typically, some of an HDD’s capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. Also there is confusion regarding storage capacity, since capacities are stated in decimal gigabytes (powers of 1000) by HDD manufacturers, whereas the most commonly used operating systems report capacities in powers of 1024, which results in a smaller number than advertised. Performance is specified by the time required to move the heads to a track or cylinder (average access time) adding the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally the speed at which the data is transmitted (data rate).
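The decimal-versus-binary discrepancy described above is easy to reproduce with a quick calculation:

```python
# A drive advertised as "1 TB" uses decimal units (powers of 1000)
advertised_bytes = 10**12

# Many operating systems report the same byte count in binary
# units (powers of 1024), here tebibytes (TiB)
tib = advertised_bytes / 2**40
print(f"1 TB (decimal) = {tib:.3f} TiB (binary)")  # about 0.909
```

This is why a newly formatted 1 TB drive appears to hold roughly 931 "GB" in an operating system that counts in powers of 1024; no capacity is actually missing.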
The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial Attached SCSI) cables.
RAM
Random-access memory (RAM; /ræm/) is a form of computer memory that can be read and changed in any order, typically used to store working data and machine code.[1][2] A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory, in contrast with other direct-access data storage media (such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory), where the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.
RAM contains multiplexing and demultiplexing circuitry, to connect the data lines to the addressed storage for reading or writing the entry. Usually more than one bit of storage is accessed by the same address, and RAM devices often have multiple data lines and are said to be “8-bit” or “16-bit”, etc. devices.
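The relationship between address lines, data lines, and capacity can be sketched as follows. This is a simplified model with made-up figures, not a description of any specific chip: each of the 2^N locations selected by N address lines stores one word, whose width in bits is the number of data lines.

```python
# Simplified sketch: address and data line counts determine a RAM device's capacity.

def ram_capacity_bits(address_lines: int, data_lines: int) -> int:
    """Each of the 2**address_lines locations stores one data_lines-bit word."""
    return (2 ** address_lines) * data_lines

# A hypothetical "8-bit" device with 10 address lines:
bits = ram_capacity_bits(10, 8)
print(bits // 8, "bytes")   # 1024 locations x 1 byte each = 1024 bytes (1 KiB)
```

Doubling the word width ("16-bit" instead of "8-bit") doubles capacity without adding address lines, which is one reason devices are described by their data-bus width.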
In today’s technology, random-access memory takes the form of integrated circuit (IC) chips with MOS (metal-oxide-semiconductor) memory cells. RAM is normally associated with volatile types of memory (such as dynamic random-access memory (DRAM) modules), where stored information is lost if power is removed, although non-volatile RAM has also been developed.[3] Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash.
The two main types of volatile random-access semiconductor memory are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Commercial uses of semiconductor RAM date back to 1965, when IBM introduced the SP95 SRAM chip for their System/360 Model 95 computer, and Toshiba used DRAM memory cells for its Toscal BC-1411 electronic calculator, both based on bipolar transistors. Commercial MOS memory, based on MOS transistors, was developed in the late 1960s, and has since been the basis for all commercial semiconductor memory. The first commercial DRAM IC chip, the Intel 1103, was introduced in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992.
ROM
Read-only memory (ROM) is a type of non-volatile memory used in computers and other electronic devices. Data stored in ROM cannot be electronically modified after the manufacture of the memory device. Read-only memory is useful for storing software that is rarely changed during the life of the system, also known as firmware. Software applications (like video games) for programmable devices can be distributed as plug-in cartridges containing ROM.
Read-only memory strictly refers to memory that is hard-wired, such as a diode matrix or a mask ROM integrated circuit (IC), which cannot be electronically changed after manufacture. Although discrete circuits can be altered in principle, through the addition of bodge wires and/or the removal or replacement of components, ICs cannot. Correcting errors, or updating the software, requires new devices to be manufactured to replace the installed device.
Floating-gate ROM semiconductor memory in the form of erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) and flash memory can be erased and re-programmed. But usually, this can only be done at relatively slow speeds, may require special equipment to achieve, and is typically only possible a certain number of times.[1]
The term “ROM” is sometimes used to mean a ROM device containing specific software, or a file of software to be stored in EEPROM or flash memory. For example, users modifying or replacing the Android operating system describe files containing a modified or replacement operating system as “custom ROMs”, after the type of storage chip such files were once written to.
USB
Universal Serial Bus (USB) is an industry standard that establishes specifications for cables, connectors and protocols for connection, communication and power supply (interfacing) between computers, peripherals and other computers.[3] A broad variety of USB hardware exists, including 14 different connectors, of which USB-C is the most recent.
Released in 1996, the USB standard is maintained by the USB Implementers Forum (USB-IF). The four generations of USB specifications are: USB 1.x, USB 2.0, USB 3.x, and USB4.