Programming languages are the languages with which a programmer implements a piece of software to run on a computer. The earliest programming languages were assembly languages, not far removed from the binary-encoded instructions directly executed by the computer. By the mid-1950s, programmers began to use higher-level languages.
Two of the first higher-level languages were FORTRAN (Formula Translation) and ALGOL (Algorithmic Language), which allowed programmers to write algebraic expressions and solve scientific computing problems. As learning to program became increasingly important in the 1960s, a stripped-down version of FORTRAN called BASIC (Beginner’s All-Purpose Symbolic Instruction Code) was developed at Dartmouth College. BASIC quickly spread to other academic institutions, and by 1980 versions of BASIC for personal computers allowed even students at elementary schools to learn the fundamentals of programming. In the late 1950s, COBOL (Common Business-Oriented Language) was developed to support business programming applications that involved managing information stored in records and files.
The trend since then has been toward developing increasingly abstract languages, allowing the programmer to communicate with the machine at a level ever more remote from machine code. COBOL, FORTRAN, and their descendants (Pascal and C, for example) are known as imperative languages, since they specify as a sequence of explicit commands how the machine is to go about solving the problem at hand. These languages were also known as procedural languages, since they allowed programmers to develop and reuse procedures, subroutines, and functions to avoid reinventing basic tasks for every new application.
Other high-level languages are called functional languages, in that a program is viewed as a collection of (mathematical) functions and its semantics are very precisely defined. The best-known functional language of this type is LISP (List Processing), which in the 1960s was the mainstay programming language for AI applications. Successors to LISP in the AI community include Scheme and Prolog, as well as C and C++ (see below). Scheme is similar to LISP except that it has a more formal mathematical definition. Prolog has been used largely for logic programming, and its applications include natural language understanding and expert systems. Prolog is notable as a so-called nonprocedural, or declarative, language in the sense that the programmer specifies what goals are to be accomplished but not how specific methods are to be applied to attain those goals. C and C++ have been used widely in robotics, an important application of AI research. An extension of logic programming is constraint logic programming, in which pattern matching is replaced by the more general operation of constraint satisfaction.
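The contrast between the imperative and the functional (declarative) styles can be sketched even within a single modern language. The short Java fragment below, offered purely as an illustration, computes the same total twice: first with explicit imperative steps and then in a declarative, function-based style using Java’s standard stream library.

    import java.util.List;

    public class StylesDemo {
        public static void main(String[] args) {
            List<Integer> prices = List.of(3, 8, 2, 5);

            // Imperative style: spell out, step by step, how the total is computed.
            int imperativeTotal = 0;
            for (int p : prices) {
                imperativeTotal += p;
            }

            // Declarative, function-based style: state what is wanted (the sum of the
            // elements) and leave the mechanics of iteration to the library.
            int declarativeTotal = prices.stream().mapToInt(Integer::intValue).sum();

            System.out.println(imperativeTotal + " " + declarativeTotal);   // both print 18
        }
    }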
Another important development in programming languages through the 1980s was the addition of support for data encapsulation, which gave rise to object-oriented languages. One of the earliest object-oriented languages was Smalltalk, in which all programs were represented as collections of objects communicating with each other via message passing. An object is a set of data together with the methods (functions) that can transform that data. Encapsulation refers to the fact that an object’s data can be accessed only through these methods. Object-oriented programming has been very influential in computing. Languages for object-oriented programming include C++, Visual Basic, and Java.
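A brief sketch in Java illustrates the idea. The hypothetical BankAccount class below bundles its data (a balance) with the methods allowed to transform that data; because the field is declared private, encapsulation is enforced and outside code can reach the balance only through those methods.

    // BankAccount is an object type: it bundles data (the balance) with the
    // methods that are allowed to transform that data.
    public class BankAccount {
        private double balance;                    // encapsulated: hidden from outside code

        public void deposit(double amount) {
            if (amount > 0) {
                balance += amount;
            }
        }

        public boolean withdraw(double amount) {
            if (amount > 0 && amount <= balance) {
                balance -= amount;
                return true;
            }
            return false;                          // insufficient funds
        }

        public double getBalance() {               // read-only access through a method
            return balance;
        }

        public static void main(String[] args) {
            BankAccount account = new BankAccount();
            account.deposit(100.0);
            account.withdraw(30.0);
            System.out.println(account.getBalance());   // prints 70.0
        }
    }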
Java is unusual because its applications are translated not into a particular machine language but into an intermediate language called Java bytecode, which runs on the Java Virtual Machine (JVM). Programs on the JVM can be executed on most contemporary computer platforms, including Intel-based systems, Apple Macintoshes, and various Android-based smartphones and tablets. Thus, Linux, macOS, Windows, and other operating systems can run Java programs, which makes Java ideal for creating distributed and Web-based applications. Residing on Web servers, Java programs may be downloaded and run in any standard Web browser to provide access to various services, such as a client interface to a game or entry to a database residing on a server.
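For example, a small Java source file such as the illustrative Hello.java below is compiled once into bytecode with the javac tool and can then be run, unchanged, by the java command on any platform that provides a JVM.

    // Hello.java is compiled once into platform-neutral bytecode and then run on any JVM:
    //   javac Hello.java      (produces Hello.class, containing Java bytecode)
    //   java Hello            (the JVM executes the bytecode on the local machine)
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello from the Java Virtual Machine");
        }
    }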
At a still higher level of abstraction lie declarative and scripting languages, which are typically interpreted and often drive applications running in Web browsers and mobile devices. Some declarative languages allow programmers to conveniently access and retrieve information from a database using “queries,” which are declarations of what to do (rather than how to do it). A widely used database query language is SQL (Structured Query Language) and its variants (e.g., MySQL and SQLite). Associated with these declarative languages are those that describe the layout of a Web page on the user’s screen. For example, HTML (HyperText Markup Language) supports the design of Web pages by specifying their structure and content. Gluing the Web page together with the database is the task of a scripting language (e.g., PHP), which gives programmers a way to integrate declarative statements of HTML and MySQL with the imperative actions required to carry out an interaction between the user and the database. An example is an online book order with Amazon.com, where the user queries the database to find out what books are available and then initiates an order by pressing buttons and filling in the appropriate text areas with ordering information. The software that underlies this activity includes HTML to describe the content of the Web page, MySQL to access the database according to the user’s requests, and PHP to control the overall flow of the transaction.
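As a rough sketch of this division of labor, the Java fragment below uses Java’s standard JDBC interface (rather than PHP) and an invented bookstore database, table, and connection URL. It embeds a declarative SQL query, which states what rows are wanted, inside the imperative code that handles the rest of the interaction.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BookQuery {
        public static void main(String[] args) throws Exception {
            // The connection URL, credentials, table, and columns are invented for
            // illustration; a real application would also need the matching JDBC driver.
            String url = "jdbc:mysql://localhost/bookstore";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT title, price FROM books WHERE author = ?")) {   // declarative: what rows are wanted
                stmt.setString(1, "Jane Austen");
                try (ResultSet rows = stmt.executeQuery()) {
                    while (rows.next()) {                                         // imperative: how to present them
                        System.out.println(rows.getString("title") + "  $" + rows.getDouble("price"));
                    }
                }
            }
        }
    }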
Computer programs written in any language other than machine language must be either interpreted or translated into machine language (“compiled”). As suggested above, an interpreter is software that examines a computer program one instruction at a time and calls on code to execute the machine operations required by that instruction.
A compiler is software that translates an entire computer program into machine code that is saved for subsequent execution whenever desired. Much work has been done on making both the compilation process and the compiled code as efficient as possible. When a new language is developed, it is usually interpreted at first. If it later becomes popular, a compiler is developed for it, since compiled code runs much more efficiently than interpreted code.
There is an intermediate approach, which is to compile code not into machine language but into an intermediate language (the instruction set of a so-called virtual machine) that is close enough to machine language that it is efficient to interpret, though not so close that it is tied to the machine language of a particular computer. It is this approach that provides the Java language with its computer platform independence via the JVM.
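A toy illustration of the approach, not the actual JVM instruction set, is sketched below in Java: a handful of invented stack-machine instructions serve as the intermediate language, and a short loop interprets them on whatever computer runs the program.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // A toy stack-based virtual machine: a compiler for some higher-level language
    // could emit this intermediate code, and the loop below interprets it on any host.
    public class TinyVM {
        // Invented opcodes, for illustration only; real virtual machines define many more.
        static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3;

        static void run(int[] code) {
            Deque<Integer> stack = new ArrayDeque<>();
            int pc = 0;                                                   // program counter
            while (pc < code.length) {
                switch (code[pc++]) {
                    case PUSH:  stack.push(code[pc++]); break;            // next word is a literal value
                    case ADD:   stack.push(stack.pop() + stack.pop()); break;
                    case MUL:   stack.push(stack.pop() * stack.pop()); break;
                    case PRINT: System.out.println(stack.pop()); break;
                }
            }
        }

        public static void main(String[] args) {
            // Intermediate code for the expression (2 + 3) * 4; prints 20.
            run(new int[] { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT });
        }
    }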
Security and information assurance
Security and information assurance refers to policy and technical elements that protect information systems by ensuring their availability, integrity, authentication, and appropriate levels of confidentiality. Information security concepts occur in many areas of computer science, including operating systems, computer networks, databases, and software.
Operating system security involves protection from outside attacks by malicious software that interferes with the system’s completion of ordinary tasks. Network security provides protection of entire networks from attacks by outsiders. Information in databases is especially vulnerable to being stolen, destroyed, or modified maliciously when the database server is accessible to multiple users over a network. The first line of defense is to allow access to a computer only to authorized users by authenticating them with a password or similar mechanism.
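A minimal sketch in Java of this mechanism, using the standard MessageDigest class and an invented password, shows the usual approach: the system stores only a salted hash of each password and compares hashes at login, so the password itself never needs to be kept. (Production systems use dedicated password-hashing algorithms such as bcrypt or PBKDF2 rather than a bare SHA-256 hash.)

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;

    public class PasswordCheck {
        // Hash a password together with a random salt. A minimal sketch: real systems
        // use dedicated password-hashing algorithms such as bcrypt, scrypt, or PBKDF2.
        static byte[] hash(String password, byte[] salt) throws Exception {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            digest.update(salt);
            return digest.digest(password.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);

            byte[] stored  = hash("correct horse", salt);   // kept by the system; the password itself is not stored
            byte[] attempt = hash("correct horse", salt);   // recomputed from what the user types at login

            System.out.println(MessageDigest.isEqual(stored, attempt) ? "access granted" : "access denied");
        }
    }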
However, clever programmers (known as hackers) have learned how to evade such mechanisms by designing computer viruses, programs that replicate themselves, spread among the computers in a network, and “infect” systems by destroying resident files and applications. Data can be stolen by using devices such as “Trojan horses,” programs that carry out a useful task but also contain hidden malicious code, or simply by eavesdropping on network communications. The need to protect sensitive data (e.g., to protect national security or individual privacy) has led to advances in cryptography and the development of encryption standards that provide a high level of confidence that the data is safe from decoding by even the most clever attacks.
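As a rough illustration of such a standard, the Java sketch below encrypts a short message with AES using the javax.crypto library; key generation and handling are simplified here for the example and are not meant as a complete protocol.

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class EncryptDemo {
        public static void main(String[] args) throws Exception {
            // Generate a 256-bit AES key (in practice, keys are exchanged or stored securely).
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(256);
            SecretKey key = keyGen.generateKey();

            // AES in GCM mode provides both confidentiality and an integrity check.
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));

            byte[] ciphertext = cipher.doFinal("sensitive data".getBytes(StandardCharsets.UTF_8));
            System.out.println(Base64.getEncoder().encodeToString(ciphertext));
        }
    }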
Software engineering
Software engineering is the discipline concerned with the application of theory, knowledge, and practice to building reliable software systems that satisfy the computing requirements of customers and users. It is applicable to small-, medium-, and large-scale computing systems and organizations. Software engineering uses engineering methods, processes, techniques, and measurements. Software development, whether done by an individual or a team, requires choosing the most appropriate tools, methods, and approaches for a given environment.
Software accounts for an ever larger part of computer systems and has become increasingly complicated to develop, often requiring teams of programmers and years of effort. Thus, the development of a large piece of software can be viewed as an engineering task to be approached with care and attention to cost, reliability, and maintainability of the final product. The software engineering process is usually described as consisting of several phases, called a life cycle, variously defined but generally including requirements development, analysis and specification, design, construction, validation, deployment, operation, and maintenance.
Concern over the high failure rate of software projects has led to the development of nontraditional software development processes. Notable among these is the agile software process, which includes rapid development and involves the client as an active and critical member of the team. Agile development has been effectively used in the development of open-source software, which is different from proprietary software because users are free to download and modify it to fit their particular application needs. Particularly successful open-source software products include the Linux operating system, the Firefox Web browser, and the Apache OpenOffice word processing/spreadsheet/presentation suite.
Regardless of the development methodology chosen, the software development process is expensive and time-consuming. Since the early 1980s, increasingly sophisticated tools have been built to aid the software developer and to automate the development process as much as possible. Such computer-aided software engineering (CASE) tools span a wide range of types, from those that carry out the task of routine coding when given an appropriately detailed design in some specified language to those that incorporate an expert system to enforce design rules and eliminate software defects prior to the coding phase.
As the size and complexity of software have grown, the concept of reuse has become increasingly important in software engineering, since it is clear that extensive new software cannot be created cheaply and rapidly without incorporating existing program modules (subroutines, or pieces of computer code). One of the attractive aspects of object-oriented programming is that code written in terms of objects is readily reused. As with other aspects of computer systems, reliability (usually rather vaguely defined as the likelihood that a system will operate correctly over a reasonably long period of time) is a key goal of the finished software product.
Sophisticated techniques for testing software have also been designed. For example, unit testing is a strategy for testing every individual module of a software product independently before the modules are combined into a whole and tested using “integration testing” techniques.
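A minimal sketch, assuming the widely used JUnit 5 library and a hypothetical Calculator module invented for the example, shows what a unit test looks like in Java.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class CalculatorTest {

        // The module under test; in a real project this would live in its own source file.
        static class Calculator {
            int add(int a, int b) { return a + b; }
        }

        @Test
        void addingTwoNumbersReturnsTheirSum() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));   // the test fails, and flags this module, if add misbehaves
        }
    }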
The need for better-trained software engineers has led to the development of educational programs in which software engineering is a separate major. The recommendation that software engineers, similar to other engineers, be licensed or certified has gained increasing support, as has the process of accreditation for software engineering degree programs.