big data

big data, in technology, a term for large datasets. The term originated in the mid-1990s and was likely coined by John Mashey, who was chief scientist at the American workstation manufacturer SGI (Silicon Graphics, Inc.). Big data is traditionally characterized by the “three V’s”: volume, velocity, and variety. Volume naturally refers to the large size of such datasets; velocity refers to the speed with which such data are produced and analyzed; and variety refers to the many different types of data, which can be in text, audio, video, or other forms. (Two further V’s are sometimes added: value, referring to the usefulness of the data; and veracity, referring to the data’s truthfulness.)

Since the term big data was coined, the amount of data has grown exponentially. In 1999 an estimated 1.5 exabytes (1 exabyte = 1 billion gigabytes) of data were produced worldwide; in 2020 that number grew to an estimated 64 zettabytes (1 zettabyte = 1,000 exabytes). Around the turn of the 21st century, big data referred to datasets of a few hundred gigabytes each; in 2021 ESnet, the U.S. Department of Energy’s data-sharing network, carried more than 1 exabyte of data.
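To make these decimal prefixes concrete, the short sketch below (illustrative only, written in Python and using the definitions given above) converts both estimates to gigabytes and computes the implied growth factor.

```python
# Illustrative unit arithmetic using the decimal definitions in the text:
# 1 exabyte = 1 billion gigabytes; 1 zettabyte = 1,000 exabytes.
GIGABYTES_PER_EXABYTE = 1_000_000_000
EXABYTES_PER_ZETTABYTE = 1_000

data_1999_eb = 1.5   # estimated worldwide data produced in 1999, in exabytes
data_2020_zb = 64    # estimated worldwide data produced in 2020, in zettabytes

data_1999_gb = data_1999_eb * GIGABYTES_PER_EXABYTE
data_2020_gb = data_2020_zb * EXABYTES_PER_ZETTABYTE * GIGABYTES_PER_EXABYTE

print(f"1999: {data_1999_gb:.1e} GB")   # 1.5e+09 GB
print(f"2020: {data_2020_gb:.1e} GB")   # 6.4e+13 GB
print(f"Growth factor: {data_2020_gb / data_1999_gb:,.0f}x")  # roughly 42,667x
```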

In the 2020s nearly every industry uses big data. Entertainment companies, particularly streaming companies, use the data generated by consumers to determine which song or video a given consumer may want to see next or even to determine what kind of movie or television series the companies should produce. Banks rely on big data to find patterns that could indicate fraud or persons who may be a credit risk. Manufacturers use big data to detect faults in the production process and to avoid costly shutdowns by finding the best time for equipment maintenance.

New tools have been developed to help analyze big data. Such datasets are often stored in NoSQL databases. Traditional databases are in tabular format, with rows and columns, and the computer language SQL was designed with such relational databases in mind. However, extremely large datasets often consist of unstructured data (that is, data tending to be qualitative, such as text, video, or audio), and the databases designed to handle such data are called NoSQL because SQL may not be the best tool for working with it. Some of the most popular tools for working with big data, such as Hadoop and Spark, have been maintained and developed by the Apache Software Foundation, a nonprofit organization that supports many open-source software projects.
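Apache Spark, for example, offers high-level programming interfaces for spreading such analysis across many machines. The sketch below is illustrative only, assuming a local installation of Spark’s Python interface (PySpark) and a hypothetical text file named events.log; it counts the lines that contain a given keyword, a simple query that Spark can parallelize across a cluster.

```python
# Minimal PySpark sketch (assumes pyspark is installed; "events.log"
# is a hypothetical file used only for illustration).
from pyspark.sql import SparkSession

# Start a local Spark session; on a real cluster the master URL would
# point at the cluster manager rather than "local[*]".
spark = SparkSession.builder.appName("BigDataSketch").master("local[*]").getOrCreate()

# Read unstructured text into a DataFrame with a single "value" column.
lines = spark.read.text("events.log")

# Count the lines that mention "error"; Spark distributes the work
# across all available cores, or across cluster nodes when present.
error_count = lines.filter(lines.value.contains("error")).count()
print(f"Lines mentioning 'error': {error_count}")

spark.stop()
```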

Working with big data presents certain challenges. Storing large amounts of data requires a significant investment in equipment. Companies such as Google, Amazon, and Microsoft store data in specialized buildings called data centres, and the largest data centres require billions of litres of water per year to keep the buildings cool. Basic data analysis problems, such as ensuring that the data are accurate and complete, become much more difficult as the amount of data grows. Data security is also a major concern, particularly when the data contain sensitive information about individuals and their habits.

Erik Gregersen