BIG DATA
Big data is a term that describes the large volume of data – both structured and unstructured – that inundates a business on a day-to-day basis. But it’s not the amount of data that’s important; it’s what organizations do with the data that matters. Big data can be analyzed for insights that lead to better decisions and strategic business moves.
WHY IS BIG DATA IMPORTANT?
The importance of big data doesn’t revolve around how much data you have, but what you do with it. You can take data from any source and analyze it to find answers that enable:
1) Cost reductions
2) Time reductions
3) New product development and optimized offerings
4) Smart decision making
When you combine big data with high-powered analytics, you can accomplish business-related tasks such as:
- Determining root causes of failures, issues and defects in near-real time.
- Generating coupons at the point of sale based on the customer’s buying habits.
- Recalculating entire risk portfolios in minutes.
- Detecting fraudulent behavior before it affects your organization.
THE 3Vs OF BIG DATA
Much of the technology industry follows Gartner’s ‘3Vs’ model to define big data – data that is high in:
- Volume
- Velocity
- Variety
The volume of data organizations handle can progress from megabytes through terabytes and even petabytes. In terms of velocity, data has gone from being handled in periodic batches to having to be processed in real time. The variety of data has also diversified, from simple tables and databases through photo, web, mobile and social data, to the most challenging kind: unstructured data.
- Volume: Organizations collect data from a variety of sources, including business transactions, social media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a problem – but new technologies (such as Hadoop) have eased the burden.
- Velocity: Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time.
- Variety: Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.
At SAS, we consider two additional dimensions when it comes to big data:
- Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Is something trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage, even more so with unstructured data.
- Complexity. Today's data comes from multiple sources, which makes it difficult to link, match, cleanse and transform data across systems. However, it’s necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control.
HOW BIG IS ‘BIG DATA’?
Every day, we create 2.5 quintillion bytes of data – so much that 90% of the data in the world today has been created in the last two years alone.
When data sets get so big that they cannot be analyzed by traditional data processing tools, they become known as ‘Big Data’.
As different companies have varied ceilings on how much data they can handle, depending on their database management tools, there is no set level at which data becomes ‘big’.
This means that Big Data and analytics tend to go hand in hand: without the ability to analyze it, the data is meaningless.
WHAT IS BIG DATA?
Big data is a collection of large datasets that cannot be processed using traditional computing techniques. It is not merely data; it has become a complete subject in its own right, involving various tools, techniques and frameworks.
WHAT COMES UNDER BIG DATA?
Big data involves the data produced by different devices and applications. Given below are some of the fields that come under the umbrella of big data.
- Black Box Data: The black box is a component of helicopters, airplanes, jets and other aircraft. It captures the voices of the flight crew, recordings from microphones and earphones, and the aircraft’s performance information.
- Social Media Data: Social media platforms such as Facebook and Twitter hold information and views posted by millions of people across the globe.
- Stock Exchange Data: Stock exchange data holds information about the ‘buy’ and ‘sell’ decisions customers make on the shares of different companies.
- Power Grid Data: Power grid data holds information about the power consumed by a particular node with respect to a base station.
- Transport Data: Transport data includes the model, capacity, distance and availability of a vehicle.
- Search Engine Data: Search engines retrieve lots of data from different databases.
Whatever the source, this data takes one of three forms:
- Structured data: relational data.
- Semi-structured data: XML data.
- Unstructured data: Word documents, PDFs, plain text, media logs.
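As a rough illustration of the difference between these three forms, the sketch below (plain Java, with made-up sample records) extracts the same price from a relational-style row, an XML fragment and a free-text log line; only the first two expose structure a program can rely on.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DataForms {
    public static void main(String[] args) throws Exception {
        // Structured: a relational-style row with a fixed, known schema (id, symbol, price).
        String[] row = {"42", "ACME", "103.5"};
        double priceFromRow = Double.parseDouble(row[2]); // column position guaranteed by schema

        // Semi-structured: XML carries its structure in tags, though the layout may vary.
        String xml = "<trade><symbol>ACME</symbol><price>103.5</price></trade>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        double priceFromXml = Double.parseDouble(
                doc.getElementsByTagName("price").item(0).getTextContent());

        // Unstructured: free text; extracting the price needs ad hoc, brittle parsing.
        String log = "Trade executed for ACME at 103.5 on Tuesday";
        double priceFromText = Double.parseDouble(log.replaceAll(".* at ([0-9.]+) .*", "$1"));

        System.out.printf("%.1f %.1f %.1f%n", priceFromRow, priceFromXml, priceFromText);
    }
}
```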
BENEFITS OF BIG DATA
Big data is critical to our lives, and it is emerging as one of the most important technologies of the modern world. The following are just a few of its widely known benefits:
- Using the information kept in social networks like Facebook, marketing agencies learn about the response to their campaigns, promotions and other advertising media.
- Using information from social media, such as the preferences and product perceptions of their consumers, product companies and retail organizations plan their production.
- Using data on patients’ previous medical histories, hospitals provide better and quicker service.
BIG DATA TECHNOLOGIES
Big data technologies are important in providing more accurate analysis, which may lead to more concrete decision-making, resulting in greater operational efficiency, cost reductions and reduced risk for the business.
To harness the power of big data, you require an infrastructure that can manage and process huge volumes of structured and unstructured data in real time, and that can protect data privacy and security.
There are various technologies in the market from different vendors, including Amazon, IBM and Microsoft, to handle big data. When examining the technologies that handle big data, we look at the following two classes:
Operational Big Data
This includes systems like MongoDB that provide operational capabilities for real-time, interactive workloads where data is primarily captured and stored.
NoSQL big data systems are designed to take advantage of the new cloud computing architectures that have emerged over the past decade, which allow massive computations to be run inexpensively and efficiently. This makes operational big data workloads much easier to manage, and cheaper and faster to implement.
Some NoSQL systems can provide insights into patterns and trends based on real-time data, with minimal coding and without the need for data scientists or additional infrastructure.
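As a minimal sketch of this operational style, the snippet below uses the official MongoDB Java driver to capture a record and query it straight back; the local connection string and the shop/orders database and collection are hypothetical.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;

public class OperationalExample {
    public static void main(String[] args) {
        // Assumes a MongoDB server running locally on the default port.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // Capture: store an incoming order as a schemaless document.
            orders.insertOne(new Document("customer", "alice")
                    .append("item", "widget")
                    .append("qty", 3));

            // Interactive read: query the record back immediately.
            Document found = orders.find(eq("customer", "alice")).first();
            System.out.println(found.toJson());
        }
    }
}
```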
Analytical Big Data
This includes systems like Massively Parallel Processing (MPP) databases and MapReduce, which provide analytical capabilities for retrospective, complex analysis that may touch most or all of the data.
MapReduce provides a method of analyzing data that is complementary to the capabilities provided by SQL, and a system based on MapReduce can be scaled up from a single server to thousands of high- and low-end machines.
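To make the programming model concrete, here is a minimal in-memory word count in plain Java – not Hadoop itself, just an illustration of the two steps: the map phase turns each line into individual words, and the reduce phase groups equal words and sums their counts.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MapReduceModel {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList("big data", "big analytics", "data data");

        Map<String, Long> counts = lines.stream()
                // Map phase: split every line into individual word records.
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                // Shuffle + reduce phase: group equal words and count them.
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

        System.out.println(counts); // e.g. {big=2, data=3, analytics=1}
    }
}
```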
These two classes of technology are complementary and frequently deployed together.
HADOOP
Hadoop is an open source, Java-based programming framework that supports the processing and storage of extremely large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation.
Hadoop makes it possible to run applications on systems with thousands of commodity hardware nodes, and to handle thousands of terabytes of data. Its distributed file system facilitates rapid data transfer rates among nodes and allows the system to continue operating in the event of a node failure. This approach lowers the risk of catastrophic system failure and unexpected data loss, even if a significant number of nodes become inoperative. Consequently, Hadoop quickly emerged as a foundation for big data processing tasks such as scientific analytics, business and sales planning, and processing enormous volumes of sensor data, including from internet of things sensors.
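As a small sketch of how an application talks to this distributed file system, the snippet below uses Hadoop’s FileSystem API to write a file into HDFS and read it back. The path is hypothetical, and a configured cluster (core-site.xml on the classpath) is assumed; replication and block placement happen inside the framework, not in application code.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from the cluster configuration on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/hello.txt"); // hypothetical HDFS path

        // Write: HDFS splits the file into blocks and replicates them across nodes.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
        }

        // Read: the client fetches blocks from whichever nodes hold replicas.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}
```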
Hadoop was created by computer scientists Doug Cutting and Mike Cafarella in 2006 to support distribution for the Nutch search engine. It was inspired by Google’s MapReduce, a software framework in which an application is broken down into numerous small parts. Any of these parts, also called fragments or blocks, can be run on any node in the cluster. After years of development within the open source community, Hadoop 1.0 became publicly available in November 2012 as part of the Apache project sponsored by the Apache Software Foundation.
Since its initial release, Hadoop has been continuously developed and updated. The second iteration of Hadoop (Hadoop 2) improved resource management and scheduling. It features a high-availability file system option and support for Microsoft Windows, along with other components that expand the framework’s versatility for data processing and analytics.
DEPLOYING HADOOP
Organizations can deploy Hadoop components and supporting software packages in their local data center. However, most big data projects depend on short-term use of substantial computing resources. This type of usage is best suited to highly scalable public cloud services, such as Amazon Web Services (AWS), Google Cloud Platform and Microsoft Azure.
Public cloud providers often support Hadoop components through basic services, such as AWS Elastic Compute Cloud and Simple Storage Service instances. However, there are also services tailored specifically for Hadoop-type tasks, such as AWS Elastic MapReduce, Google Cloud Dataproc and Microsoft Azure HDInsight.
Hadoop Modules and Projects
As a software framework, Hadoop is composed of numerous functional modules. At a minimum, Hadoop uses Hadoop Common as a kernel to provide the framework’s essential libraries. Other components include the Hadoop Distributed File System (HDFS), which is capable of storing data across thousands of commodity servers to achieve high bandwidth between nodes; Hadoop Yet Another Resource Negotiator (YARN), which provides resource management and scheduling for user applications; and Hadoop MapReduce, which provides the programming model used to tackle large distributed data processing: mapping data and reducing it to a result.
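Putting these modules together, below is a condensed version of the canonical word count job from the Hadoop MapReduce tutorial: the mapper emits a (word, 1) pair for every word, and the reducer (also used as a combiner) sums the counts the framework has grouped per word. Input and output HDFS paths are supplied on the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts the framework has grouped per word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```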
Hadoop also supports a range of related projects that can complement and extend its basic capabilities. Complementary software packages include:
- Apache Flume. A tool used to collect, aggregate and move huge amounts of streaming data into HDFS.
- Apache HBase. An open source, nonrelational, distributed database.
- Apache Hive. A data warehouse that provides data summarization, query and analysis.
- Cloudera Impala. A massively parallel processing database for Hadoop, originally created by the software company Cloudera, but now released as open source software.
- Apache Oozie. A server-based workflow scheduling system to manage Hadoop jobs.
- Apache Phoenix. An open source, massively parallel processing, relational database engine for Hadoop that is based on Apache HBase.
- Apache Pig. A high-level platform for creating programs that run on Hadoop.
- Apache Sqoop. A tool to transfer bulk data between Hadoop and structured data stores, such as relational databases.
- Apache Spark. A fast engine for big data processing capable of streaming and supporting SQL, machine learning and graph processing.
- Apache Storm. An open source data processing system.
- Apache Zookeeper. An open source configuration, synchronization and naming registry service for large distributed systems.