Wednesday, 22 March 2017

BIG DATA AND HADOOP

BIG DATA
Big data is a term that describes the large volume of data – both structured and unstructured – that inundates a business on a day-to-day basis. But it’s not the amount of data that’s important. It’s what organizations do with the data that matters. Big data can be analyzed for insights that lead to better decisions and strategic business moves.

WHY IS BIG DATA IMPORTANT?
The importance of big data doesn’t revolve around how much data you have, but what you do with it. You can take data from any source and analyse it to find answers that enable
1) Cost reductions
2) Time reductions
3) New product development and optimized offerings
4) Smart decision making.
When you combine big data with high-powered analytics, you can accomplish business-related tasks such as:
  • Determining root causes of failures, issues and defects in near-real time.
  • Generating coupons at the point of sale based on the customer’s buying habits.
  • Recalculating entire risk portfolios in minutes.
  • Detecting fraudulent behavior before it affects your organization.

THE 3Vs OF BIG DATA
Much of the technology industry follows Gartner’s ‘3Vs’ model to define Big Data: data that is high in:
  • Volume
  • Velocity
  • Variety

The volume of data organisations handle can progress from megabytes through to terabytes and even petabytes. In terms of velocity, data has gone from being handled in batches and periodically to having to be processed in real time. The variety of data has also diversified from simple tables and databases through to photo, web, mobile and social data, and the most challenging: unstructured data.
  • Volume: Organizations collect data from a variety of sources, including business transactions, social media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a problem – but new technologies (such as Hadoop) have eased the burden.
  • Velocity: Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time.
  • Variety: Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.

At SAS, we consider two additional dimensions when it comes to big data:
  • Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent with periodic peaks. Is something trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage. Even more so with unstructured data.
  • Complexity. Today's data comes from multiple sources, which makes it difficult to link, match, cleanse and transform data across systems. However, it’s necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control.


HOW BIG IS ‘BIG DATA’?
Every day, we create 2.5 quintillion bytes of data – so much that 90% of the data in the world today has been created in the last two years alone.
When data sets get so big that they cannot be analysed by traditional data processing application tools, it becomes known as ‘Big Data’.
As different companies have varied ceilings on how much data they can handle, depending on their database management tools, there is no set level where data becomes ‘big’.
This means that Big Data and analytics tend to go hand-in-hand, as without being able to analyse the data it becomes meaningless.

WHAT IS BIG DATA?

Big data is exactly what the name suggests: a collection of large datasets that cannot be processed using traditional computing techniques. Big data is not merely data; it has become a complete subject in itself, involving various tools, techniques and frameworks.

WHAT COMES UNDER BIG DATA?

Big data involves the data produced by different devices and applications. Given below are some of the fields that come under the umbrella of Big Data.
  • Black Box Data: A component of helicopters, airplanes, jets, etc. It captures the voices of the flight crew, recordings from microphones and earphones, and the performance information of the aircraft.
  • Social Media Data: Social media such as Facebook and Twitter hold information and the views posted by millions of people across the globe.
  • Stock Exchange Data: The stock exchange data holds information about the ‘buy’ and ‘sell’ decisions made by customers on the shares of different companies.
  • Power Grid Data: The power grid data holds information about the power consumed by a particular node with respect to a base station.
  • Transport Data: Transport data includes the model, capacity, distance and availability of a vehicle.
  • Search Engine Data: Search engines retrieve lots of data from different databases.
Thus Big Data includes huge volume, high velocity, and an extensible variety of data. The data in it will be of three types:
  • Structured data: relational data.
  • Semi-structured data: XML data.
  • Unstructured data: Word documents, PDFs, text, media logs.

BENEFITS OF BIG DATA

Big data is really critical to our lives, and it is emerging as one of the most important technologies in the modern world. Following are just a few benefits that are well known to all of us:
  • Using the information kept in social networks like Facebook, marketing agencies are learning about the response to their campaigns, promotions, and other advertising media.
  • Using the information in social media, such as the preferences and product perceptions of their consumers, product companies and retail organizations are planning their production.
  • Using data regarding the previous medical history of patients, hospitals are providing better and quicker service.

BIG DATA TECHNOLOGIES

Big data technologies are important in providing more accurate analysis, which may lead to more concrete decision-making resulting in greater operational efficiencies, cost reductions, and reduced risks for the business.
To harness the power of big data, you would require an infrastructure that can manage and process huge volumes of structured and unstructured data in real-time and can protect data privacy and security.
There are various technologies in the market from different vendors including Amazon, IBM, Microsoft, etc., to handle big data. While looking into the technologies that handle big data, we examine the following two classes of technology:

Operational Big Data

This includes systems like MongoDB that provide operational capabilities for real-time, interactive workloads where data is primarily captured and stored.
NoSQL big data systems are designed to take advantage of new cloud computing architectures that have emerged over the past decade to allow massive computations to be run inexpensively and efficiently. This makes operational big data workloads much easier to manage, cheaper, and faster to implement.
Some NoSQL systems can provide insights into patterns and trends based on real-time data with minimal coding and without the need for data scientists and additional infrastructure.

Analytical Big Data

This includes systems like Massively Parallel Processing (MPP) database systems and MapReduce that provide analytical capabilities for retrospective and complex analysis that may touch most or all of the data.
MapReduce provides a new method of analysing data that is complementary to the capabilities provided by SQL, and a system based on MapReduce can be scaled up from single servers to thousands of high- and low-end machines.
These two classes of technology are complementary and frequently deployed together.

HADOOP
Hadoop is an open source, Java-based programming framework that supports the processing and storage of extremely large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation.

Hadoop makes it possible to run applications on systems with thousands of commodity hardware nodes, and to handle thousands of terabytes of data. Its distributed file system facilitates rapid data transfer rates among nodes and allows the system to continue operating in case of a node failure. This approach lowers the risk of catastrophic system failure and unexpected data loss, even if a significant number of nodes become inoperative. Consequently, Hadoop quickly emerged as a foundation for big data processing tasks, such as scientific analytics, business and sales planning, and processing enormous volumes of sensor data, including from internet of things sensors.
Hadoop was created by computer scientists Doug Cutting and Mike Cafarella in 2006 to support distribution for the Nutch search engine. It was inspired by Google's MapReduce, a software framework in which an application is broken down into numerous small parts. Any of these parts, which are also called fragments or blocks, can be run on any node in the cluster. After years of development within the open source community, Hadoop 1.0 became publicly available in November 2012 as part of the Apache project sponsored by the Apache Software Foundation.
Since its initial release, Hadoop has been continuously developed and updated. The second iteration of Hadoop (Hadoop 2) improved resource management and scheduling. It features a high-availability file-system option and support for Microsoft Windows and other components to expand the framework's versatility for data processing and analytics. 
DEPLOYING HADOOP
Organizations can deploy Hadoop components and supporting software packages in their local data centre. However, most big data projects depend on short-term use of substantial computing resources. This type of usage is best-suited to highly scalable public cloud services, such as Amazon Web Services (AWS), Google Cloud Platform and Microsoft Azure. Public cloud providers often support Hadoop components through basic services, such as AWS Elastic Compute Cloud and Simple Storage Service instances. However, there are also services tailored specifically for Hadoop-type tasks, such as AWS Elastic MapReduce, Google Cloud Dataproc and Microsoft Azure HDInsight.

Hadoop modules and projects

As a software framework, Hadoop is composed of numerous functional modules. At a minimum, Hadoop uses Hadoop Common as a kernel to provide the framework's essential libraries. Other components include Hadoop Distributed File System (HDFS), which is capable of storing data across thousands of commodity servers to achieve high bandwidth between nodes; Hadoop Yet Another Resource Negotiator (YARN), which provides resource management and scheduling for user applications; and Hadoop MapReduce, which provides the programming model used to tackle large distributed data processing -- mapping data and reducing it to a result.
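The split of work between the map and reduce steps can be illustrated without a cluster. The following is a simplified, in-memory sketch of the MapReduce word-count pattern in plain Java (it does not use the Hadoop API; the class and method names are illustrative): the map step emits a (word, 1) pair for every word, and the reduce step groups the pairs by key and sums the counts.

```java
import java.util.*;

public class MapReduceSketch {
    // Map step: emit a (word, 1) pair for every word in every input line.
    static List<Map.Entry<String, Integer>> map(List<String> lines) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines)
            for (String word : line.toLowerCase().split("\\s+"))
                if (!word.isEmpty())
                    pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
        return pairs;
    }

    // Shuffle + reduce step: group the pairs by key and sum the values.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs)
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("big data and hadoop", "big data tools");
        System.out.println(reduce(map(input))); // {and=1, big=2, data=2, hadoop=1, tools=1}
    }
}
```

In real Hadoop MapReduce the same two steps run as Mapper and Reducer classes distributed across the cluster's nodes, with HDFS supplying the input splits.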
Hadoop also supports a range of related projects that can complement and extend Hadoop's basic capabilities. Complementary software packages include:
  • Apache Flume. A tool used to collect, aggregate and move huge amounts of streaming data into HDFS;
  • Apache HBase. An open source, nonrelational, distributed database;
  • Apache Hive. A data warehouse that provides data summarization, query and analysis;
  • Cloudera Impala. A massively parallel processing database for Hadoop, originally created by the software company Cloudera, but now released as open source software;
  • Apache Oozie. A server-based workflow scheduling system to manage Hadoop jobs;
  • Apache Phoenix. An open source, massively parallel processing, relational database engine for Hadoop that is based on Apache HBase;
  • Apache Pig. A high-level platform for creating programs that run on Hadoop;
  • Apache Sqoop. A tool to transfer bulk data between Hadoop and structured data stores, such as relational databases;
  • Apache Spark. A fast engine for big data processing capable of streaming and supporting SQL, machine learning and graph processing;
  • Apache Storm. An open source data processing system; and
  • Apache Zookeeper. An open source configuration, synchronization and naming registry service for large distributed systems.

Tuesday, 21 March 2017

Cloud Computing

CLOUD COMPUTING
      Cloud computing is a model of delivering computing resources from the Internet to the user. It enables convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Services and solutions that are delivered and consumed in real time over the Internet are cloud services. Cloud computing enables companies to consume computing resources such as virtual machines. The cloud infrastructure is much more powerful and reliable than personal computing devices. Cloud storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned or third-party data centers, which may be located far from the users, ranging in distance from across a city to across the world.

            Cloud computing is a result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge of or expertise with each one of them. The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more “virtual” devices, each of which can be easily used and managed to perform computing tasks. Virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Cloud computing adopts concepts from service-oriented architecture that can help the user break problems into services that can be integrated to provide a solution. It provides tools and technologies to build data- and compute-intensive parallel applications at much more affordable prices compared to traditional parallel computing techniques. Cloud computing also leverages concepts from utility computing to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models.
Characteristics of Cloud Computing

·        Easy Access

Cloud-based applications and files can be accessed easily on your company’s network, whether you or your employees are in the office, at home, or traveling.
·        Reduced Cost
For the most part, cloud based applications only need to be purchased once, and are then distributed among users accessing the front-facing platform through the cloud. There is no need to buy multiple licenses for the same program for every one of your employees.
·        Enhanced Productivity
With the ability to pull data from anywhere within your company’s cloud, your employees will be able to work faster without needing to either physically get the files, or wait on an email response back for a file.
·        High Growth Potential
As you add more employees, all that needs to be done to get them on the cloud is for them to have login credentials created. Less setup time, and much faster than installing software on a machine.

·        Real-Time Data Changes

As changes are made to files in the cloud, other people can have access to those changes immediately. Employees no longer need to re-download files in order to have the most up-to-date version.

·        Data Security

Since there is only one place where all the data is stored, it is much easier to protect it against a security breach. Cloud computing takes away much of the threat of data theft that comes from information being spread out among multiple computers.

·        Device Operability

Some cloud-based networks are accessible from smart phones and tablets. This is convenient for people not in the office, but who still need to maintain connectivity to the network.

·        Easily Monitored Performance

Seeing changes made to data, and seeing who makes the changes, makes it easy to monitor employee performance. This can be done easily at the end of the day, or even in real time.

·        Open Communication among Collaborators

The ability to communicate easily among employees working on one project is essential. Cloud computing makes this simple with the ability to either message another collaborator, or just change the data as they see fit.

·        Reduced Maintenance

Fewer software-based problems will come up with a cloud based system. Employee computers are basically just terminals which access the cloud, meaning if a computer were to become inoperable, it could easily be replaced.
Types of Cloud
·        Public Cloud
·        Private Cloud
·        Hybrid Cloud


Public Cloud
          A public cloud is basically the internet. Service providers use the internet to make resources, such as applications (also known as Software-as-a-Service) and storage, available to the general public, or on a ‘public cloud’. Examples of public clouds include Amazon Elastic Compute Cloud (EC2), IBM’s Blue Cloud, Sun Cloud, Google App Engine and Windows Azure Services Platform. For users, these types of clouds provide the best economies of scale and are inexpensive to set up because hardware, application and bandwidth costs are covered by the provider. It’s a pay-per-usage model, and the only costs incurred are based on the capacity that is used. There are some limitations, however; the public cloud may not be the right fit for every organization. The model can limit configuration, security, and SLA specificity, making it less than ideal for services using sensitive data that is subject to compliance regulations.
Private Cloud
Private clouds are data center architectures owned by a single company that provide flexibility, scalability, provisioning, automation and monitoring. The goal of a private cloud is not to sell “as-a-service” offerings to external customers but instead to gain the benefits of cloud architecture without giving up control of your own data center.
Private clouds can be expensive, with typically modest economies of scale. This is usually not an option for the average small-to-medium sized business and is most typically put to use by large enterprises. Private clouds are driven by concerns around security and compliance, and keeping assets within the firewall.
Hybrid Cloud
By using a hybrid approach, companies can maintain control of an internally managed private cloud while relying on the public cloud as needed. For instance, during peak periods individual applications, or portions of applications, can be migrated to the public cloud. This will also be beneficial during predictable outages: hurricane warnings, scheduled maintenance windows, rolling brown/blackouts.
The ability to maintain an off-premise disaster recovery site is impossible for most organizations due to cost. While there are lower-cost solutions and alternatives, the further down the spectrum an organization goes, the more its capability to recover data quickly is reduced. Cloud-based Disaster Recovery (DR)/Business Continuity (BC) services allow organizations to contract failover out to a Managed Services Provider that maintains multi-tenant infrastructure for DR/BC and specializes in getting business back online quickly.
Three Types of Cloud Computing Services
·        Infrastructure as a Service (IaaS)
·        Platform as a Service (PaaS)
·        Software as a Service (SaaS)
Infrastructure as a Service:
          The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).
Platform as a Service:
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

Software as a Service:


The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Tuesday, 7 February 2017

EX:1a    SUBSTITUTION & TRANSPOSITION TECHNIQUES CAESAR CIPHER 

AIM
To write a program for encrypting a plain text and decrypting a cipher text using the Caesar Cipher (shift cipher) substitution technique.

ALGORITHM DESCRIPTION

·         It is a type of substitution cipher in which each letter in the plain text is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on.

·         The method is named after Julius Caesar, who used it in his private correspondence.

·         The transformation can be represented by aligning two alphabets; the cipher alphabet is the plain alphabet rotated left or right by some number of positions.

·         The encryption can also be represented using modular arithmetic by first transforming the letters into numbers, according to the scheme A = 0, B = 1, ..., Z = 25.

·         Encryption of a letter x by a shift n can be described mathematically as,

En(x) = (x + n) mod 26

·         Decryption is performed similarly,

Dn(x) = (x - n) mod 26
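The two formulas can be checked directly with a minimal sketch (the class and method names here are illustrative):

```java
public class CaesarFormula {
    // E_n(x) = (x + n) mod 26, with letters A-Z mapped to 0-25
    static char encryptChar(char x, int n) {
        return (char) ('A' + (x - 'A' + n % 26 + 26) % 26);
    }

    // D_n(x) = (x - n) mod 26; adding 26 keeps Java's remainder non-negative
    static char decryptChar(char x, int n) {
        return (char) ('A' + (x - 'A' - n % 26 + 26) % 26);
    }

    public static void main(String[] args) {
        System.out.println(encryptChar('A', 3)); // D
        System.out.println(encryptChar('X', 3)); // wraps around: X -> A
        System.out.println(decryptChar('D', 3)); // A
    }
}
```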


PROGRAM
import java.io.*;
import java.util.Scanner;

public class CaesarCipher
{
    public static void main(String[] args)
    {
        Scanner s = new Scanner(System.in);
        System.out.println("Input Data to encrypt:");
        String str = s.nextLine();
        System.out.println("Input the key");
        int key = s.nextInt();
        String encrypted = encrypt(str, key);
        System.out.println("Encrypted Data :" + encrypted);
        String decrypted = decrypt(encrypted, key);
        System.out.println("Decrypted Data:" + decrypted);
    }

    public static String encrypt(String str, int key)
    {
        String encrypted = "";
        for (int i = 0; i < str.length(); i++)
        {
            int c = str.charAt(i);
            /* shift within A-Z or a-z, wrapping past the end of the alphabet */
            if (Character.isUpperCase(c))
            {
                c = c + (key % 26);
                if (c > 'Z')
                    c = c - 26;
            }
            else if (Character.isLowerCase(c))
            {
                c = c + (key % 26);
                if (c > 'z')
                    c = c - 26;
            }
            encrypted += (char) c;
        }
        return encrypted;
    }

    public static String decrypt(String str, int key)
    {
        String decrypted = "";
        for (int i = 0; i < str.length(); i++)
        {
            int c = str.charAt(i);
            /* reverse the shift, wrapping past the start of the alphabet */
            if (Character.isUpperCase(c))
            {
                c = c - (key % 26);
                if (c < 'A')
                    c = c + 26;
            }
            else if (Character.isLowerCase(c))
            {
                c = c - (key % 26);
                if (c < 'a')
                    c = c + 26;
            }
            decrypted += (char) c;
        }
        return decrypted;
    }
}

OUTPUT


          


               RESULT
Thus the java program to implement Caesar Cipher was executed and the output was verified.

EX: 1b  SUBSTITUTION & TRANSPOSITION TECHNIQUES 
 PLAY FAIR CIPHER
AIM
To write a program to encrypt a plain text and decrypt a cipher text using the Playfair Cipher substitution technique.

     ALGORITHM DESCRIPTION
·         The Playfair cipher uses a 5 by 5 table containing a key word or phrase.
·         To generate the key table, first fill the spaces in the table with the letters of the keyword, then fill the remaining spaces with the rest of the letters of the alphabet in order (usually omitting "Q" to reduce the alphabet to fit; other versions put both "I" and "J" in the same space).

·         The key can be written in the top rows of the table, from left to right, or in some other pattern, such as a spiral beginning in the upper-left-hand corner and ending in the centre.

·         The keyword together with the conventions for filling in the 5 by 5 table constitutes the cipher key. To encrypt a message, one would break the message into digrams (groups of 2 letters) such that, for example, "HelloWorld" becomes "HE LL OW OR LD", and map them out on the key table. Then apply the following 4 rules to each pair of letters in the plaintext:

·         If both letters are the same (or only one letter is left), add an "X" after the first letter. Encrypt the new pair and continue. Some variants of Playfair use "Q" instead of "X", but any letter, itself uncommon as a repeated pair, will do.

·         If the letters appear on the same row of your table, replace them with the letters to their immediate right respectively (wrapping around to the left side of the row if a letter in the original pair was on the right side of the row).

·         If the letters appear on the same column of your table, replace them with the letters immediately below respectively (wrapping around to the top side of the column if a letter in the original pair was on the bottom side of the column).

·         If the letters are not on the same row or column, replace them with the letters on the same row respectively but at the other pair of corners of the rectangle defined by the original pair. The order is important – the first letter of the encrypted pair is the one that lies on the same row as the first letter of the plaintext pair.

·         To decrypt, use the INVERSE (opposite) of the last 3 rules, and the 1st as-is (dropping any extra "X"s, or "Q"s that do not make sense in the final message when finished).

PROGRAM
import java.awt.Point;
import java.util.*;

class Play
{
    private static char[][] charTable;
    private static Point[] positions;

    /* keep only letters; merge J into I, or drop Q, depending on the variant */
    private static String prepareText(String s, boolean chgJtoI)
    {
        s = s.toUpperCase().replaceAll("[^A-Z]", "");
        return chgJtoI ? s.replace("J", "I") : s.replace("Q", "");
    }

    /* build the 5x5 key table and remember each letter's position */
    private static void createTbl(String key, boolean chgJtoI)
    {
        charTable = new char[5][5];
        positions = new Point[26];
        String s = prepareText(key + "ABCDEFGHIJKLMNOPQRSTUVWXYZ", chgJtoI);
        int len = s.length();
        for (int i = 0, k = 0; i < len; i++)
        {
            char c = s.charAt(i);
            if (positions[c - 'A'] == null)
            {
                charTable[k / 5][k % 5] = c;
                positions[c - 'A'] = new Point(k % 5, k / 5);
                k++;
            }
        }
    }

    /* dir = 1 encrypts (shift right/down); dir = 4 decrypts (shift left/up mod 5) */
    private static String codec(StringBuilder txt, int dir)
    {
        int len = txt.length();
        for (int i = 0; i < len; i += 2)
        {
            char a = txt.charAt(i);
            char b = txt.charAt(i + 1);
            int row1 = positions[a - 'A'].y;
            int row2 = positions[b - 'A'].y;
            int col1 = positions[a - 'A'].x;
            int col2 = positions[b - 'A'].x;
            if (row1 == row2)
            {
                col1 = (col1 + dir) % 5;
                col2 = (col2 + dir) % 5;
            }
            else if (col1 == col2)
            {
                row1 = (row1 + dir) % 5;
                row2 = (row2 + dir) % 5;
            }
            else
            {
                int tmp = col1;
                col1 = col2;
                col2 = tmp;
            }
            txt.setCharAt(i, charTable[row1][col1]);
            txt.setCharAt(i + 1, charTable[row2][col2]);
        }
        return txt.toString();
    }

    /* insert X between doubled letters and pad odd-length text */
    private static String encode(String s)
    {
        StringBuilder sb = new StringBuilder(s);
        for (int i = 0; i < sb.length(); i += 2)
        {
            if (i == sb.length() - 1)
            {
                sb.append(sb.length() % 2 == 1 ? 'X' : "");
            }
            else if (sb.charAt(i) == sb.charAt(i + 1))
            {
                sb.insert(i + 1, 'X');
            }
        }
        return codec(sb, 1);
    }

    private static String decode(String s)
    {
        return codec(new StringBuilder(s), 4);
    }

    public static void main(String[] args) throws java.lang.Exception
    {
        String key = "mysecretkey";
        String txt = "CRYPTOLABS"; /* make sure string length is even */
        boolean chgJtoI = true;    /* change J to I */
        createTbl(key, chgJtoI);
        String enc = encode(prepareText(txt, chgJtoI));
        System.out.println("simulation of Playfair Cipher");
        System.out.println("input message : " + txt);
        System.out.println("encoded message : " + enc);
        System.out.println("decoded message : " + decode(enc));
    }
}
OUTPUT



 RESULT
Thus the java program to implement Playfair Cipher was executed and the output was verified.


    EX: 1c   SUBSTITUTION & TRANSPOSITION TECHNIQUES  HILL CIPHER

AIM
To write a program to encrypt and decrypt using the Hill cipher substitution technique.

ALGORITHM DESCRIPTION

·         The Hill cipher is a substitution cipher invented by Lester S. Hill in 1929.

·         Each letter is represented by a number modulo 26. To encrypt a message, each block of n letters is multiplied by an invertible n × n matrix, again modulo 26.

·         To decrypt the message, each block is multiplied by the inverse of the matrix used for encryption. The matrix used for encryption is the cipher key, and it should be chosen randomly from the set of invertible n × n matrices (modulo 26).

·         The cipher can be adapted to an alphabet with any number of letters.

·         All arithmetic just needs to be done modulo the number of letters instead of modulo 26.
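For decryption to undo encryption, the product of the key matrix and its claimed inverse must reduce to the identity matrix modulo 26. A small sketch (using the same 3 × 3 matrices as the program below; the class and method names are illustrative) can verify this:

```java
public class HillKeyCheck {
    static int[][] keymat    = { { 1, 2, 1 }, { 2, 3, 2 }, { 2, 2, 1 } };
    static int[][] invkeymat = { { -1, 0, 1 }, { 2, -1, 0 }, { -2, 2, -1 } };

    // Multiply two 3x3 matrices and reduce every entry modulo 26.
    static int[][] multiplyMod26(int[][] a, int[][] b) {
        int[][] p = new int[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                int sum = 0;
                for (int k = 0; k < 3; k++)
                    sum += a[i][k] * b[k][j];
                p[i][j] = ((sum % 26) + 26) % 26; // keep the result non-negative
            }
        return p;
    }

    public static void main(String[] args) {
        int[][] product = multiplyMod26(keymat, invkeymat);
        // The product is the identity matrix, so decryption undoes encryption.
        System.out.println(java.util.Arrays.deepToString(product));
        // prints [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    }
}
```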

PROGRAM
import java.util.*;

class hill
{
    /* 3x3 key matrix for 3 characters at once */
    public static int[][] keymat = new int[][]
        { { 1, 2, 1 }, { 2, 3, 2 }, { 2, 2, 1 } };
    /* key inverse matrix */
    public static int[][] invkeymat = new int[][]
        { { -1, 0, 1 }, { 2, -1, 0 }, { -2, 2, -1 } };
    public static String key = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    /* encrypt one block: (row vector) x keymat, mod 26 */
    private static String encode(char a, char b, char c)
    {
        int posa = (int) a - 65;
        int posb = (int) b - 65;
        int posc = (int) c - 65;
        int x = posa * keymat[0][0] + posb * keymat[1][0] + posc * keymat[2][0];
        int y = posa * keymat[0][1] + posb * keymat[1][1] + posc * keymat[2][1];
        int z = posa * keymat[0][2] + posb * keymat[1][2] + posc * keymat[2][2];
        a = key.charAt(x % 26);
        b = key.charAt(y % 26);
        c = key.charAt(z % 26);
        return "" + a + b + c;
    }

    /* decrypt one block: (row vector) x invkeymat, mod 26 */
    private static String decode(char a, char b, char c)
    {
        int posa = (int) a - 65;
        int posb = (int) b - 65;
        int posc = (int) c - 65;
        int x = posa * invkeymat[0][0] + posb * invkeymat[1][0] + posc * invkeymat[2][0];
        int y = posa * invkeymat[0][1] + posb * invkeymat[1][1] + posc * invkeymat[2][1];
        int z = posa * invkeymat[0][2] + posb * invkeymat[1][2] + posc * invkeymat[2][2];
        /* the inverse matrix has negative entries, so force a non-negative remainder */
        a = key.charAt((x % 26 < 0) ? (26 + x % 26) : (x % 26));
        b = key.charAt((y % 26 < 0) ? (26 + y % 26) : (y % 26));
        c = key.charAt((z % 26 < 0) ? (26 + z % 26) : (z % 26));
        return "" + a + b + c;
    }

    public static void main(String[] args) throws java.lang.Exception
    {
        String enc = "";
        String dec = "";
        String msg = "SecurityLaboratory";
        System.out.println("simulation of Hill Cipher");
        System.out.println("input message : " + msg);
        msg = msg.toUpperCase();
        msg = msg.replaceAll("\\s", ""); /* remove spaces */
        int n = msg.length() % 3;
        /* append padding text X */
        if (n != 0)
        {
            for (int i = 1; i <= (3 - n); i++)
            {
                msg += 'X';
            }
        }
        System.out.println("padded message : " + msg);
        char[] pdchars = msg.toCharArray();
        for (int i = 0; i < msg.length(); i += 3)
        {
            enc += encode(pdchars[i], pdchars[i + 1], pdchars[i + 2]);
        }
        System.out.println("encoded message : " + enc);
        char[] dechars = enc.toCharArray();
        for (int i = 0; i < enc.length(); i += 3)
        {
            dec += decode(dechars[i], dechars[i + 1], dechars[i + 2]);
        }
        System.out.println("decoded message : " + dec);
    }
}

OUTPUT




 RESULT
Thus the Java program to implement the Hill Cipher was executed and the output was verified.

EX: 1d   SUBSTITUTION & TRANSPOSITION  TECHNIQUES  VIGENERE CIPHER


AIM

To write a program for encryption and decryption using the Vigenere Cipher substitution technique.

ALGORITHM DESCRIPTION

  • The Vigenere Cipher is a method of encrypting alphabetic text by using a series of different Caesar ciphers based on the letters of a keyword.

  • It is a simple form of polyalphabetic substitution.

  • To encrypt, a table of alphabets can be used, termed a Vigenere square, or Vigenere table. It consists of the alphabet written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the previous alphabet, corresponding to the 26 possible Caesar ciphers.

  • At different points in the encryption process, the cipher uses a different alphabet, taken from one of the rows of the table.

  • The alphabet at each point depends on a repeating keyword.
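The per-letter arithmetic behind the table can be sketched as below. This is an illustrative fragment, not part of the lab program; the class and method names (`VigenereDemo`, `encryptChar`, `decryptChar`) are ours, and both the plaintext letter and key letter are assumed to be in A-Z.

```java
/* Minimal sketch of Vigenere per-letter arithmetic (A-Z only). */
public class VigenereDemo {
    /* shift plaintext letter p forward by the key letter k's position */
    static char encryptChar(char p, char k) {
        return (char) ((p - 'A' + (k - 'A')) % 26 + 'A');
    }
    /* shift ciphertext letter c backward by the key letter k's position */
    static char decryptChar(char c, char k) {
        return (char) ((c - k + 26) % 26 + 'A');
    }
    public static void main(String[] args) {
        /* 'H' (7) shifted by 'K' (10) gives 17, i.e. 'R' */
        System.out.println(encryptChar('H', 'K'));
        System.out.println(decryptChar('R', 'K'));
    }
}
```

The full program below simply applies this pair of formulas letter by letter while cycling through the keyword.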
  
PROGRAM
import java.io.*;
import java.util.Scanner;

public class VigenereCipher
{
    public static String encrypt(String text, final String key)
    {
        String res = "";
        text = text.toUpperCase();
        for (int i = 0, j = 0; i < text.length(); i++)
        {
            char c = text.charAt(i);
            if (c < 'A' || c > 'Z')
                continue;
            res += (char) ((c + key.charAt(j) - 2 * 'A') % 26 + 'A');
            j = (j + 1) % key.length();
        }
        return res;
    }

    public static String decrypt(String text, final String key)
    {
        String res = "";
        text = text.toUpperCase();
        for (int i = 0, j = 0; i < text.length(); i++)
        {
            char c = text.charAt(i);
            if (c < 'A' || c > 'Z')
                continue;
            res += (char) ((c - key.charAt(j) + 26) % 26 + 'A');
            j = (j + 1) % key.length();
        }
        return res;
    }

    public static void main(String[] args)
    {
        Scanner s = new Scanner(System.in);
        System.out.println("Enter the line : ");
        String message = s.nextLine();
        System.out.println("Enter the key");
        /* the key must be upper case for the A-Z arithmetic to work */
        String key = s.nextLine().toUpperCase();
        String encryptedMsg = encrypt(message, key);
        System.out.println("String: " + message);
        System.out.println("Encrypted message: " + encryptedMsg);
        System.out.println("Decrypted message: " + decrypt(encryptedMsg, key));
    }
}

  
OUTPUT




  
RESULT
Thus the Java program to implement the Vigenere Cipher was executed and the output was verified.

       EX: 1e   SUBSTITUTION & TRANSPOSITION TECHNIQUES RAIL-FENCE ROW & COLUMN TRANSFORMATION


 AIM
To write a program for encryption and decryption using Rail-Fence row and column transposition technique.

ALGORITHM DESCRIPTION


  • In the rail fence cipher, the plaintext is written downwards and diagonally on successive "rails" of an imaginary fence, then moving up when we reach the bottom rail.

  • When we reach the top rail, the message is written downwards again until the whole plaintext is written out.

  • The message is then read off in rows.
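The zig-zag writing described above can be sketched as follows. This is an illustrative fragment, separate from the lab's row-and-column program below; the names (`RailFenceDemo`, `encrypt`) are ours. Writing "HELLO" on 2 rails places H, L, O on the top rail and E, L on the bottom, so reading off by rows gives "HLOEL".

```java
/* Minimal zig-zag rail fence encryption sketch. */
public class RailFenceDemo {
    static String encrypt(String text, int rails) {
        if (rails < 2) return text;          // one rail is just the plaintext
        StringBuilder[] rows = new StringBuilder[rails];
        for (int i = 0; i < rails; i++) rows[i] = new StringBuilder();
        int row = 0, dir = 1;                // dir = +1 moving down, -1 moving up
        for (char ch : text.toCharArray()) {
            rows[row].append(ch);
            if (row == 0) dir = 1;           // bounce off the top rail
            else if (row == rails - 1) dir = -1; // bounce off the bottom rail
            row += dir;
        }
        StringBuilder out = new StringBuilder();
        for (StringBuilder r : rows) out.append(r); // read off row by row
        return out.toString();
    }
    public static void main(String[] args) {
        System.out.println(encrypt("HELLO", 2));  // prints HLOEL
    }
}
```

Note that the lab program that follows implements the row-and-column (columnar) variant: it writes the message into a matrix column by column and reads it out row by row, rather than zig-zagging.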

PROGRAM
import java.io.*;
import java.util.Scanner;

class RailFenceBasic
{
    String Encryption(String plainText, int depth) throws Exception
    {
        int r = depth, len = plainText.length();
        /* round up so no trailing characters are dropped when len is not a multiple of depth */
        int c = (len + depth - 1) / depth;
        char mat[][] = new char[r][c];
        int k = 0;
        String cipherText = "";
        /* write the message column by column, padding with X */
        for (int i = 0; i < c; i++)
        {
            for (int j = 0; j < r; j++)
            {
                if (k < len)
                    mat[j][i] = plainText.charAt(k++);
                else
                    mat[j][i] = 'X';
            }
        }
        /* read it off row by row */
        for (int i = 0; i < r; i++)
        {
            for (int j = 0; j < c; j++)
            {
                cipherText += mat[i][j];
            }
        }
        return cipherText;
    }

    String Decryption(String cipherText, int depth) throws Exception
    {
        int r = depth, len = cipherText.length();
        int c = len / depth;
        char mat[][] = new char[r][c];
        int k = 0;
        String plainText = "";
        /* refill the matrix row by row */
        for (int i = 0; i < r; i++)
        {
            for (int j = 0; j < c; j++)
            {
                mat[i][j] = cipherText.charAt(k++);
            }
        }
        /* read it back column by column */
        for (int i = 0; i < c; i++)
        {
            for (int j = 0; j < r; j++)
            {
                plainText += mat[j][i];
            }
        }
        return plainText;
    }
}

public class RailfenceCipher
{
    public static void main(String[] args) throws Exception
    {
        RailFenceBasic rf = new RailFenceBasic();
        Scanner scn = new Scanner(System.in);
        System.out.println("Enter plain text:");
        String plainText = scn.nextLine();
        System.out.println("Enter depth for Encryption:");
        int depth = scn.nextInt();
        String cipherText = rf.Encryption(plainText, depth);
        System.out.println("\nEncrypted text is:  " + cipherText);
        String decryptedText = rf.Decryption(cipherText, depth);
        System.out.println("\nDecrypted text is:  " + decryptedText);
    }
}
OUTPUT


RESULT
Thus the Java program to implement the Rail-Fence row and column transformation was executed and the output was verified.