Wednesday, May 23, 2018

Data Analysis from MongoDB using R

Most of us are aware of R: it is a programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and doing data analysis. If we empower R with the right datasets and sources, it is the icing on the cake, so in this post we are going to see how R can be connected to MongoDB and how you can apply R's power to datasets stored in MongoDB.

Prerequisite for this demo: you should have the MongoDB daemon up and running on a server or on your local machine (a local standalone instance).

Start your R session and install the "rmongodb" package by issuing the commands below:

        > install.packages("rmongodb")
        > library(rmongodb)

Connect R to the MongoDB instance:
   
       > mongo <- mongo.create(host = "127.0.0.1", name = "", username = "", password = "", db = "test", timeout = 0L)

You'll get a response like the one below. With the above connection configuration you are connecting to the mongo instance on 127.0.0.1, to the 'test' database, with an empty username and password.

        [1] 0
        attr(,"mongo")
        <pointer: 0x0884f0a8>
        attr(,"class")
        [1] "mongo"
        attr(,"host")
        [1] "127.0.0.1"
        attr(,"name")
        [1] ""
        attr(,"username")
        [1] ""
        attr(,"password")
        [1] ""
        attr(,"db")
        [1] "test"
        attr(,"timeout")
        [1] 0   

You can check whether R is connected to MongoDB by issuing the command below:

        > mongo.is.connected(mongo)
        [1] TRUE

Now R is successfully connected to the 'test' database on the MongoDB instance, so you can easily fire simple mongo queries and use R's power to compute analytics over MongoDB datasets.

For example, to get a single record from Mongo:

        > mongo.find.one(mongo, "test.zip", list())

We can also use filter queries to fetch records from MongoDB into R datasets:

        > mongo.find(mongo, "test.zip", list(pop = list('$gt' = 21L)))

So, this is just the beginning; stay tuned for the next updates.
Thanks for visiting. I'd appreciate your thoughts and comments.

Sunday, November 12, 2017

Serialization Frameworks (Protocols): When and What?

Technically, there are two important differences between the protocols below:

- Statically typed or dynamically typed
- Type mapping between the language's type system and the serializer's type system (note: these serializers are cross-language)

The easiest difference to understand is "statically typed" vs "dynamically typed". It affects how you manage compatibility between data and programs. Statically typed serializers don't store detailed type information of objects in the serialized data, because it is described in source code or an IDL. Dynamically typed serializers store type information alongside the values.

- Statically typed: Protocol Buffers, Thrift
- Dynamically typed: JSON, Avro, MessagePack, BSON

Generally speaking, statically typed serializers can store objects in fewer bytes, but they can't detect mismatches between the data and the IDL. They must trust that the IDL is correct, since the data doesn't include type information. In other words, statically typed serializers are high-performance, but you must take great care about compatibility of data and programs.
Note that some serializers add their own improvements for these problems. Protocol Buffers stores some (not detailed) type information in the data, so it can detect a mismatch between IDL and data. MessagePack stores type information in an efficient format, so its data size can be smaller than Protocol Buffers or Thrift (depending on the data).

Type systems are also an important difference. The following list compares the type systems of Protocol Buffers, Avro and MessagePack:

- Protocol Buffers: int32, int64, uint32, uint64, sint32, sint64, fixed32, fixed64, sfixed32, sfixed64, double, float, bool, string, bytes, repeated, message
- Avro: int, long, float, double, boolean, null, bytes, fixed, string, enum, array, map, record
- MessagePack: Integer, Float, Boolean, Nil, Raw, Array, Map (=same as JSON)

Serializers must map these types to and from the language's types to achieve cross-language compatibility. This means that some types supported by your favorite language can't be stored by some serializers, while too many types may cause interoperability problems. For example, Protocol Buffers doesn't have a map (dictionary) type. Avro doesn't distinguish unsigned integers from signed integers, while Protocol Buffers does. Avro and Protocol Buffers have an enum type, while MessagePack doesn't.

These choices made sense for their designers: Protocol Buffers was initially designed for C++, Avro for Java, and MessagePack aims for interoperability with JSON.

Some of the advantages and disadvantages of these serialization frameworks:

1. XML
The advantages of XML are that it is human readable/editable, extensible and interoperable. XML provides a structure to data so that it is richer in information, and it is easily processed because that structure is simple and standard. There is a wide range of reusable software available to programmers to handle XML, so they don't have to re-invent code. XML provides many views of the same data and separates the presentation of data from the structure of that data. It is the standard for SOAP and similar protocols.

XML uses Unicode encoding for the data, and XML can be parsed without knowing the schema in advance.
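
As a small illustration of that tooling support, here is a minimal sketch that parses a tiny, made-up XML payload with the JDK's built-in DOM parser, without knowing any schema in advance:

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class XmlDemo {
            public static void main(String[] args) throws Exception {
                // A tiny, hypothetical XML payload; no schema is needed to parse it
                String xml = "<user><name>Alice</name><city>Pune</city></user>";

                DocumentBuilder builder =
                        DocumentBuilderFactory.newInstance().newDocumentBuilder();
                Document doc = builder.parse(new InputSource(new StringReader(xml)));

                // Walk the structure: root element name, then one child's text value
                System.out.println(doc.getDocumentElement().getTagName());
                System.out.println(doc.getElementsByTagName("name").item(0).getTextContent());
            }
        }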

2. JSON
JSON is much easier for humans to read than XML. It is easier to write, too, and it is also easier for machines to read and write. JSON also provides structure to data, so that it is richer in information and easy to process. JSON is a good data-exchange format and performs great for the right use cases. JSON schemas and structures are based on arrays and records inside JSON objects. JSON has excellent browser support and is less verbose than XML.

Like XML, JSON also uses Unicode encoding for the data. The advantages of JSON over XML are that the message size is much smaller and it is still readable/editable. JSON can be parsed without knowing the schema in advance.

JSON has quickly become widely known; its simplicity and the ease of converting XML to JSON make it ultimately more adoptable.
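
To show how little ceremony that takes in code, here is a minimal sketch using the Jackson library (the library choice and the POJO are my own assumptions; any mainstream JSON library looks much the same):

        import com.fasterxml.jackson.databind.ObjectMapper;

        public class JsonDemo {
            // A plain POJO used only for this illustration
            public static class User {
                public String name;
                public int visits;
            }

            public static void main(String[] args) throws Exception {
                ObjectMapper mapper = new ObjectMapper();

                // Object to JSON text
                User u = new User();
                u.name = "Alice";
                u.visits = 3;
                String json = mapper.writeValueAsString(u);   // {"name":"Alice","visits":3}

                // JSON text back to an object; no schema or code generation required
                User back = mapper.readValue(json, User.class);
                System.out.println(back.name + " / " + back.visits);
            }
        }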

3. Protocol Buffers
Protocol Buffers has a small output size and very dense data, with very fast processing. It is hard to robustly decode without knowing the schema (the data format is internally ambiguous and needs the schema to clarify it). Only machines can understand it; it is not intended for human eyes (dense binary).

Protocol Buffers has full backward compatibility, and requires less boilerplate code for parsing compared to JSON or XML.
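
As a sketch of what this looks like in practice, assume a small hypothetical user.proto compiled with protoc --java_out; the generated builder API then reads roughly like this:

        // user.proto (the IDL both sides must agree on):
        //   syntax = "proto3";
        //   message User {
        //     string name   = 1;
        //     int32  visits = 2;
        //   }

        public class ProtobufDemo {
            public static void main(String[] args) throws Exception {
                // "User" is the class protoc generates from the hypothetical user.proto above
                User u = User.newBuilder()
                             .setName("Alice")
                             .setVisits(3)
                             .build();

                byte[] bytes = u.toByteArray();        // compact, dense binary; not human readable

                User parsed = User.parseFrom(bytes);   // decoding needs the same generated classes/schema
                System.out.println(parsed.getName() + " / " + parsed.getVisits());
            }
        }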

4. BSON (Binary JSON)
BSON can be compared to binary interchange formats like Protocol Buffers. BSON is more "schema-less" than Protocol Buffers, which gives it an advantage in flexibility but also a slight disadvantage in space efficiency (BSON has overhead for field names within the serialized data).

BSON is lightweight: keeping spatial overhead to a minimum is important for any data representation format, especially when used over the network. It is traversable: BSON is designed to be traversed easily, which is a vital property in its role as the primary data representation for MongoDB.

It is also efficient: encoding data to BSON and decoding from BSON can be performed very quickly in most languages due to the use of C data types.
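
As a small illustration of that trade-off, here is a sketch using the org.bson classes that ship with the MongoDB Java driver (the library choice and the field names are my own assumptions):

        import org.bson.BsonBinaryWriter;
        import org.bson.Document;
        import org.bson.codecs.DocumentCodec;
        import org.bson.codecs.EncoderContext;
        import org.bson.io.BasicOutputBuffer;

        public class BsonDemo {
            public static void main(String[] args) {
                // Schema-less: the field names travel with the values
                Document doc = new Document("name", "Alice").append("visits", 3);

                // Encode the document to raw BSON bytes
                BasicOutputBuffer buffer = new BasicOutputBuffer();
                new DocumentCodec().encode(new BsonBinaryWriter(buffer), doc,
                        EncoderContext.builder().build());
                System.out.println("BSON size: " + buffer.getSize() + " bytes");

                // The same document rendered as JSON, for comparison
                System.out.println(doc.toJson());
            }
        }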

5. Apache Thrift
Apache Thrift provides cross-language serialization with lower overhead than alternatives such as SOAP, thanks to its binary format. It is a lean and clean library: no framework to code against, no XML configuration files, and the language bindings feel natural, for example Java uses ArrayList<String> and C++ uses std::vector<std::string>. The application-level wire format and the serialization-level wire format are cleanly separated, so they can be modified independently. The predefined serialization styles include binary, HTTP-friendly and compact binary, and Thrift doubles as a cross-language file serialization format.

Thrift does not require a centralized and explicit versioning mechanism like major-version/minor-version, so loosely coupled teams can freely evolve RPC calls. It has no build dependencies or non-standard software, and no mix of incompatible software licenses.
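
A minimal sketch of Thrift's Java serialization API, assuming a hypothetical user.thrift struct compiled with the Thrift compiler, looks like this:

        // user.thrift (IDL):  struct User { 1: string name, 2: i32 visits }
        import org.apache.thrift.TDeserializer;
        import org.apache.thrift.TSerializer;
        import org.apache.thrift.protocol.TCompactProtocol;

        public class ThriftDemo {
            public static void main(String[] args) throws Exception {
                // "User" is the class the Thrift compiler generates from the hypothetical user.thrift
                User u = new User();
                u.setName("Alice");
                u.setVisits(3);

                // Compact binary is one of the predefined serialization styles
                TSerializer serializer = new TSerializer(new TCompactProtocol.Factory());
                byte[] bytes = serializer.serialize(u);

                User back = new User();
                new TDeserializer(new TCompactProtocol.Factory()).deserialize(back, bytes);
                System.out.println(back.getName() + " / " + back.getVisits());
            }
        }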

6. MessagePack
The tagline of MessagePack says "It's like JSON. But fast and small." MessagePack is not human readable, as it stores data in a binary format.
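
A minimal sketch with the msgpack-java library (an assumed dependency; the values are made up) shows the pack/unpack style:

        import org.msgpack.core.MessageBufferPacker;
        import org.msgpack.core.MessagePack;
        import org.msgpack.core.MessageUnpacker;

        public class MessagePackDemo {
            public static void main(String[] args) throws Exception {
                // Pack values into a compact binary buffer; types travel with the data, no IDL
                MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
                packer.packString("Alice");
                packer.packInt(3);
                byte[] bytes = packer.toByteArray();
                System.out.println("packed size: " + bytes.length + " bytes");

                // Unpack in the same order; any language with a msgpack library can read this
                MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(bytes);
                System.out.println(unpacker.unpackString() + " / " + unpacker.unpackInt());
            }
        }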

7. Apache Avro
Nowadays Apache Avro is becoming popular in the industry because of its compact message size and its evolving-schemas feature.

- Schema evolution: Avro requires schemas when data is written or read. Most interestingly, you can use different schemas for serialization and deserialization, and Avro will handle the missing/extra/modified fields.
- Untagged data: providing a schema with the binary data allows each datum to be written without overhead. The result is more compact data encoding and faster data processing.
- Dynamic typing: this refers to serialization and deserialization without code generation. It complements the code generation, which is available in Avro for statically typed languages as an optional optimization.
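
The sketch below illustrates the untagged-data and dynamic-typing points with Avro's GenericRecord API; the schema string and field names are invented for the example, and no code generation is involved:

        import java.io.ByteArrayOutputStream;
        import org.apache.avro.Schema;
        import org.apache.avro.generic.GenericData;
        import org.apache.avro.generic.GenericDatumWriter;
        import org.apache.avro.generic.GenericRecord;
        import org.apache.avro.io.BinaryEncoder;
        import org.apache.avro.io.EncoderFactory;

        public class AvroDemo {
            public static void main(String[] args) throws Exception {
                // The schema is supplied at runtime; no generated classes are required
                Schema schema = new Schema.Parser().parse(
                        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                        + "{\"name\":\"name\",\"type\":\"string\"},"
                        + "{\"name\":\"visits\",\"type\":\"int\"}]}");

                GenericRecord user = new GenericData.Record(schema);
                user.put("name", "Alice");
                user.put("visits", 3);

                // Untagged binary encoding: only the values are written, the schema explains them
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
                new GenericDatumWriter<GenericRecord>(schema).write(user, encoder);
                encoder.flush();
                System.out.println("encoded size: " + out.size() + " bytes");
            }
        }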

Tuesday, July 18, 2017

Music Analytics Opportunities

Once upon a time, music listening habits were as private as people's bedrooms: music lovers bought CDs, recordings and other physical copies of music and never shared them publicly. Record companies were aware of which radio stations played their songs and where their CDs were popular, but that information painted an incomplete picture at best. Who knew what music people were sharing on tapes and CDs burnt in the privacy of their own bedrooms?

Traditional business metrics stopped at the number of CDs sold, and nothing happened after that; who purchased what, and whom to recommend what to buy next, all of this was anonymous. That has all changed after the explosion of online music sources: torrenting, music streaming sites and social media platforms now play a key role in helping the music industry understand its fans and spot upcoming talent like never before, and anyone's personal music interests are nowadays becoming public. Music analytics is now worth around $24.35 billion per year.


At the same time that the internet is taking power away from record labels, it is also giving them the ability to predict future hits.

Sunday, March 12, 2017

My First Publication: YARN Essentials

If you have a working knowledge of Hadoop 1.x but want to start afresh with YARN, this book is ideal for you. You will be able to install and administer a YARN cluster and also discover the configuration settings to fine-tune your cluster both in terms of performance and scalability. This book will help you develop, deploy, and run multiple applications/frameworks on the same shared YARN cluster.


YARN is the next generation generic resource platform used to manage resources in a typical cluster and is designed to support multi-tenancy in its core architecture. As optimal resource utilization is central to the design of YARN, learning how to fully utilize the available fine-grained resources (RAM, CPU cycles, and so on) in the cluster becomes vital.


This book is an easy-to-follow, self-learning guide to help you start working with YARN. Beginning with an overview of YARN and Hadoop, you will dive into the pitfalls of Hadoop 1.x and how YARN takes us to the next level. You will learn the concepts, terminology, architecture, core components, and key interactions, and cover the installation and administration of a YARN cluster as well as learning about YARN application development with new and emerging data processing frameworks.


Thank you!

Big Businesses with Big Opportunities

Nowadays businesses are struggling with the abnormally growing volume, speed and variety of information generated every day, and the complexity of that information generation is also rapidly growing; this is what has come to be known as 'Bigdata'. Many companies are seeking technology not only to help them store and process bigdata, but also to find more business insights in it and to grow their business strategies with bigdata.

Around 80% of the information in the world is unstructured, and many businesses are not even attempting to use that information to their advantage, or are not aware of how to use it. Imagine if you and your business could afford to keep all the data your business generates and keep tracking and analyzing it; imagine if you knew the way to handle that bigdata.

The data explosion presents a great challenge to businesses: today most lack the technology and knowledge about bigdata, how to deal with it and how to get real business value out of it. Many companies are focusing on developing the skills and insights the business needs to accelerate the path of transforming larger data sets.
What can bigdata do? Businesses are growing with bigdata, finding more business insights and the data that carries value for the business, using the latest bigdata processing technologies such as the Hadoop framework.
It is now possible to track each individual user through cell phones and wireless sensors, measuring their interest in particular things, where they live, work and play, and what their day-to-day routine is; to collect that data, analyse this huge data set using bigdata processing technologies, and find business ways to help each individual user and make their life simpler.
Bigdata in social networking: day to day, millions of Facebook comments and updates, Twitter tweets and much more are generated, so bigdata processing is used to find out current market trends, what people are talking about, and their likes and dislikes, and to plan our business accordingly.
Bigdata in healthcare: every hospital or healthcare organization maintains historical patient records, which can be a kind of bigdata, so technology can analyse those past records and predict which patients, on what date and with what cause, will need care, and what the possible treatments for a similar cause are.
Bigdata in BFSI: in the BFSI domain, fault tolerance is one of the most important pillars. There are millions of daily banking transactions, and we want to find the fake ones; bigdata helps us here, and it also plays a major role in product recommendations and transaction analysis.
Bigdata in e-commerce: somewhere on online shopping sites you might have seen dialogs like 'you bought this, you may like this'; these are recommendations calculated by bigdata processing technologies.

Of the information that we have today, about 90% was generated in just the last 2 years. I believe that after 2020 about 70% of businesses in the world won't have any other option but to rely on it: products will be delivered to the customer as soon as he thinks of buying them, a cab will arrive when people think of going out, and a discount will already be there on the shirt we might be thinking of buying.

Saturday, February 11, 2017

Recommendations with Apache Mahout

Recommendation?

Have you ever been recommended a friend on Facebook? Or visited a shopping portal where you can see items recommended for you, or an item you might be interested in on Amazon? If so, then you've benefited from the value of recommendation systems.
For example, you often see personalized recommendations phrased something like, "If you liked that item, you might also like this one...". These sites use recommendations to help drive users to other things they offer in an intelligent, meaningful way, tailored specifically to the user and the user's preferences.

Recommendation systems apply knowledge discovery techniques to the problem of making recommendations that are personalized for each user. Recommendation systems are one way we can use algorithms to help us sort through the masses of information to find the “good stuff” in a very managed way.

From an algorithmic standpoint, the recommendation systems we'll talk about today fall into the k-nearest neighbor family of problems (another type would be an SVD-based recommender). We want to predict the estimated preference of a user towards an item they have never seen before. We also want to generate a ranked (by preference score) list of items the user might be most interested in. Two well-known styles of recommendation algorithms are item-based recommenders and user-based recommenders. Both types rely on the concept of a similarity function/metric (e.g. Euclidean distance, log likelihood), whether it is for users or items.

Overview of a recommendation engine

The main purpose of a recommendation engine is to make inferences on existing data to show relationships between objects and entities. Objects can be many things, including users, items and products (in short, user-related data) and so on. Relationships provide a degree of likeness or belonging between objects. For example, relationships can represent ratings of how much a user likes an item, or indicate whether a user bookmarked a particular page.

To make a recommendation, recommendation engines perform several steps to mine the data (data mining). Initially, you begin with input data that represents the objects as well as their relationships. The input data consists of object identifiers and the relationships to other objects.


Consider the ratings users give to items. Using this input data, a recommendation engine computes a similarity between objects. Computing the similarity between objects (co-similarity) can take a great deal of time, depending on the size of the data and the particular algorithm. Distributed frameworks such as Apache Hadoop with Mahout can be used to parallelize the computation of the similarities, and there are different types of algorithms to compute them. Finally, using the similarity information, the recommendation engine can serve recommendation requests based on the parameters requested.
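
Before scaling out on Hadoop, the same item-based idea can be tried on a single machine with Mahout's Java (Taste) API. The sketch below is a minimal example, assuming a local ratings.csv in the userID,movieID,rating layout we produce later in this post and a made-up user ID:

        import java.io.File;
        import java.util.List;
        import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
        import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
        import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
        import org.apache.mahout.cf.taste.model.DataModel;
        import org.apache.mahout.cf.taste.recommender.RecommendedItem;
        import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

        public class ItemBasedDemo {
            public static void main(String[] args) throws Exception {
                // userID,movieID,rating per line, the same layout as the ratings.csv we create below
                DataModel model = new FileDataModel(new File("ratings.csv"));

                // Log likelihood is one of the similarity metrics mentioned above
                ItemSimilarity similarity = new LogLikelihoodSimilarity(model);
                GenericItemBasedRecommender recommender =
                        new GenericItemBasedRecommender(model, similarity);

                // Top 10 items for user 6 (a made-up user ID for illustration)
                List<RecommendedItem> recs = recommender.recommend(6L, 10);
                for (RecommendedItem item : recs) {
                    System.out.println(item.getItemID() + " -> " + item.getValue());
                }
            }
        }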

For Example:
GroupLens Movie Data

The input data for this demo is based on 1 million anonymous ratings of approximately 4,000 movies made by 6,040 MovieLens users, which you can download from the www.grouplens.org site. The zip file contains four files:

movies.dat (movie ids with title and category)
ratings.dat (ratings of movies)
README
users.dat (user information)

The ratings file (ratings.dat) is most interesting to us since it's the main input to our recommendation job. Each line has the format:

UserID::MovieID::Rating::Timestamp

So let's adjust our input file to match what we need to run our job. First download the zip file from the grouplens.org site mentioned above and unzip it locally.

Next run the command:
        tr -s ':' ',' < ratings.dat | cut -f1-3 -d, > ratings.csv

This produces the csv output format we’ll use in the next section when we run our “Itembased Collaborative Filtering” job.

        hadoop fs -put [my_local_file] [user_file_location_in_hdfs]

This command puts the input file on HDFS.

Next, create a user.txt file which stores the IDs of the users to whom we want to show recommendations, and put it on HDFS under a users directory.
With our user list in HDFS, we can now run the Mahout recommendation job with a command in the form of:
     
       mahout recommenditembased --input [input-hdfs-path] --output [output-hdfs-path] --tempDir [tmp-hdfs-path] --usersFile [user_file_location_in_hdfs]

This will run for a while (a chain of 10 MapReduce jobs) and then write the item recommendations into HDFS, which we can now take a look at. We can tail the output from the RecommenderJob with the command:

         hadoop fs -cat [output-hdfs-path]/part-r-00000

The output will show each user (provided in user.txt) with the recommended items.

Tuesday, December 20, 2016

Apache Spark, Apache Flink & Apache Storm

Apache Spark: Apache Spark is a batch processing engine that emulates streaming via micro-batching. It has a well-developed ecosystem and, besides its Scala and Java APIs, incorporates a Python and an R library as well. Apache Spark integrates very well with the Apache Hadoop ecosystem components.
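
As a minimal sketch of that micro-batching model in the Java API (the host, port and batch interval below are made up for the example), a streaming job looks like this:

        import org.apache.spark.SparkConf;
        import org.apache.spark.streaming.Durations;
        import org.apache.spark.streaming.api.java.JavaDStream;
        import org.apache.spark.streaming.api.java.JavaStreamingContext;

        public class SparkMicroBatchDemo {
            public static void main(String[] args) throws Exception {
                SparkConf conf = new SparkConf().setAppName("micro-batch-demo").setMaster("local[2]");

                // Each 5-second window becomes one small batch job; this is Spark's micro-batching model
                JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

                JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);
                lines.count().print();   // how many lines arrived in each micro-batch

                jssc.start();
                jssc.awaitTermination();
            }
        }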

Apache Flink: Apache Flink is a streaming dataflow engine. It can be programmed in Scala and Java. You can emulate batch processing, however at its core it is a native streaming engine. Flink shines through features under the hood, such as exactly-once fault handling, high throughput, automated memory management and advanced streaming capabilities. Apache Flink also integrates very well with the Apache Hadoop ecosystem.
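
For comparison, a minimal Flink sketch of the same kind of job (again with a made-up host and port) processes records one by one as they arrive instead of in micro-batches:

        import org.apache.flink.streaming.api.datastream.DataStream;
        import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

        public class FlinkStreamDemo {
            public static void main(String[] args) throws Exception {
                StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

                // Records flow through the operators continuously; there is no batch boundary
                DataStream<String> lines = env.socketTextStream("localhost", 9999);
                lines.filter(line -> !line.isEmpty()).print();

                env.execute("flink-socket-demo");
            }
        }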

Apache Storm: Storm is a technology created by Nathan Marz. Compared to Flink and Spark, it has a compositional API, meaning you build up your own topology from basic building blocks like sources and operators (spouts and bolts), which must be tied together to create the topology (program flow).
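
A minimal sketch of that compositional style (assuming Storm 1.x package names; SentenceSpout and SplitBolt are hypothetical classes you would implement yourself) looks like this:

        import org.apache.storm.Config;
        import org.apache.storm.LocalCluster;
        import org.apache.storm.topology.TopologyBuilder;

        public class StormTopologyDemo {
            public static void main(String[] args) throws Exception {
                TopologyBuilder builder = new TopologyBuilder();

                // SentenceSpout and SplitBolt are hypothetical spout/bolt implementations
                builder.setSpout("sentences", new SentenceSpout(), 1);
                builder.setBolt("splitter", new SplitBolt(), 2)
                       .shuffleGrouping("sentences");   // wire the bolt to the spout

                // Run the wired-up topology on an in-process local cluster
                LocalCluster cluster = new LocalCluster();
                cluster.submitTopology("demo-topology", new Config(), builder.createTopology());
            }
        }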

For a more detailed comparison of these and other streaming and batch processing frameworks, drop me a reply here; I'll try to respond as early as possible.

Wednesday, July 6, 2016

Social Networking Analysis

Around 85% of internet users around the world (1.59 billion monthly active users in 2016, representing a 25.2 percent increase over last year's figures; eighth-ranked Instagram had over 400 million monthly active accounts) use online social networking portals like Facebook, WhatsApp, Twitter and YouTube to share their experiences and to get familiar with what's happening around them. Facebook alone reported 1.55 billion monthly active users. Whenever a new product launches in any industry, we can find the real experiences of product users on these portals. Nowadays social networking plays a very important role in business analysis and in locating the lagging and growing areas of a business, which helps businesses create a strategy for improving the lagging areas and maintaining the quality of the growing areas.

Apache Hadoop and Spark play a very important role in collecting and processing this Bigdata and providing near real-time analytics out of it.

Sunday, June 26, 2016

Capitalizing Bigdata!!!

90% of the data created today is unstructured and more difficult to manage, generated from sources like social media (Facebook, Twitter), video (YouTube), text (application logs), audio, email (Gmail), and documents. As we all know, social media is becoming a revolutionary factor for businesses.

 
Bigdata is much more than data, and it is already transforming the way businesses and organizations are run. It represents a new way of doing business, creating a bright path for the future business world, one driven by data-oriented decision making and by new types of products and services influenced by data. The rapid explosion in Bigdata and in ways to handle it is changing the landscape not only of the IT industry but of all data-oriented systems. This data is becoming powerful and important for today's businesses, as it contains customer insight and business growth opportunities that have yet to be identified, or that no one even had an idea about. But due to its volume, variety and speed of change, most companies don't have enough resources to address this valuable data and get business value out of it.

It's time to get together and find the ways and patterns in bigdata that can help us make our lives even simpler. We have the solution (Hadoop), but we need to explore it more, to focus on true growth and on identifying the business opportunities.

Tuesday, May 24, 2016

Hazelcast: In-Memory NoSQL Solution

If you are evaluating high-performance NoSQL solutions such as Redis, Riak, Couchbase, MongoDB or Cassandra, or in even rarer cases if you're evaluating caching solutions such as Memcached or EHCache, it's possible that your best choice may be Hazelcast. Hazelcast takes a considerably different approach from any of the above projects, and yet for some classes of users looking for a key-value store, Hazelcast may be the best option.

So let's look at Hazelcast and why it is a good alternative to the above-mentioned systems. Hazelcast is an In-Memory Data Grid, not a NoSQL database.
Let's consider the advantages and disadvantages of using the Hazelcast in-memory solution. First of all, being a key-value store in memory, it has the usual advantages of speed and read efficiency, but also the natural disadvantages of storing a map in memory: scalability and volatility. The size of RAM is always less than the total available disk space, since RAM is more expensive than disk, so for an in-memory store space is always a constraint; and secondly, being volatile storage, RAM is cleared on process restart, resulting in data loss. Hazelcast addresses these shortcomings of in-memory stores by providing an efficient and convenient solution.
In Hazelcast, the scalability issue is addressed by clustering: by joining hundreds of nodes into a cluster, we can aggregate terabytes of in-memory space to accommodate Hazelcast maps in memory. Of course this is not comparable with disk space, which can be 100 times larger than memory, but depending upon the use case some terabytes of space is sufficient for in-memory operations, or else this solution may be used together with a backend data store.

Volatility: in Hazelcast, volatility is handled by peer-to-peer data distribution, so every block of data has multiple copies (replicated across the cluster) in different locations. If any node or rack goes down because of some issue, we can recover the data from the copies present in other locations. The number of backup copies can be configured depending upon how critical the data is; too many copies of the data in the cluster may reduce the overall memory available to other operations. Hazelcast addresses this tuning problem by providing ways to make your cluster available and reliable, including setting up the backup process so that, for example, servers on one rack are set to back up onto another rack, so the failure of an entire rack can be handled gracefully.
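
As a minimal sketch of how that is configured (assuming the Hazelcast 3.x Java API and a made-up map name), requesting two backup copies per entry looks like this:

        import com.hazelcast.config.Config;
        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.HazelcastInstance;
        import com.hazelcast.core.IMap;

        public class HazelcastBackupDemo {
            public static void main(String[] args) {
                // Ask for 2 synchronous backup copies of every entry in the "ratings" map
                Config config = new Config();
                config.getMapConfig("ratings").setBackupCount(2);

                // Each instance joins the cluster; data is partitioned across all members
                HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

                IMap<String, Integer> ratings = hz.getMap("ratings");
                ratings.put("movie-42", 5);   // stored on one member, backed up on two others
                System.out.println(ratings.get("movie-42"));
            }
        }
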
Hazelcast also addresses the rebalancing problem of the cluster: whenever a node is added to or removed from the cluster, data may be moved across the cluster to rebalance it. If a node crashes because of some issue, the data copies (primaries) on the dead node have to be re-owned: the node holding the replica of each copy (the secondary) becomes the primary, and the data is backed up on another node to make the cluster fail-safe again. This process may consume cluster resources like CPU, RAM and network, and might introduce latency during the whole rebalancing process.
In addition to the above benefits, Hazelcast also makes sure that the Java garbage collection process does not affect the terabytes of data stored in memory, specifically on the heap: as your heap gets bigger, garbage collection might cause delays in your application's response time. So Hazelcast is a memory store with native (off-heap) storage support, to prevent the garbage collector from delaying the application, resulting in more efficiency and throughput.
