Mozilla Campus Club Inauguration @ BITS, Warangal



Hello everyone,

It has been a long time since I last wrote here. This blog post shares my personal experience of the Mozilla Campus Club inauguration at Balaji Institute of Technology and Science, Warangal, Telangana, India.

I started contributing to Mozilla way back in 2012, and I am really elated that I have come a long way. It all started when a student from BITS contacted me on Facebook asking about the activities and opportunities a student can benefit from. I told him about the Mozilla Campus Club Program, and he got excited about it.

Over the next few days, I explained to him about Mozilla, its mission and vision, the Campus Club Program, its activities, and how he could become part of a global community. Then things moved fast: we decided to inaugurate a Mozilla Campus Club at their college.

So, it was decided that we would inaugurate the Mozilla Club on September 1st, 2017.

[1] Club Launch of Mozilla Campus Club BITS

[2] Addressed the audience on Mozilla, Communities and Opportunities


[3] The amazing crowd at Balaji Institute of Technology and Science.

[4] Press Coverage of the event.

Overall, it was an amazing experience talking to the students. I spoke about Mozilla, its products, the Mozilla Campus Club Program, the Activate Mozilla Campaign, Rust, WebVR, and the Firefox Nightly Campaign. Looking forward to organizing more events in the future.


Best Regards,

Ajay Kumar Jogawath

Mozilla Representative


Hosting a Website into Azure using FTP



Hello everyone, in this blog you will understand how to host your existing website on Azure. To host your website on Azure, you need an Azure account. In case you don’t have one, you can always create it by going to the Azure Portal ( )

Microsoft Azure makes hosting a website much easier, with just a few clicks. Once your account is ready, go to the Azure Portal. Here is how it looks:

Azure Portal

Requirements :

[1] Azure Account

[2] Existing Website

[3] Any Editor ( say Notepad)

Step 1 : Create a Web app using Azure Portal.

Now click on New, select Web + Mobile, and then Web App. Fill out the details to create a web app.


If you check the “Pin to dashboard” checkbox, your web app tile will be displayed on the dashboard. Click on Create to create the web app. Your app name will be concatenated with , and that will be the URL for your web app. Once it is created, click on the web app and you can see the details related to it. You can see here how it looks.


Step 2 : Connect to Azure using FTP

If you click on that URL, it will open your newly created web app. We want our own website to be displayed once it is hosted, so now we will try to host our website over FTP by uploading our files.


When you click on Get Publish Profile, it will download a file with the extension .PublishSettings. Open that file using Notepad and you will see all the details related to the FTP settings: the publish URL, username, and password. These will be used to connect to Azure over FTP.

So, open the file and you will find the publish URL, as shown above.

Publish URL =

Now copy that URL and open it in File Explorer.



You will see a prompt asking for your username and password. Enter them by copying from the .PublishSettings file and click on Log In.
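If you prefer to script the same steps instead of using File Explorer, the credentials can be read straight from the .PublishSettings file, which is just XML. Below is a minimal Python sketch (not part of the original walkthrough; the function names and file names are my own, and `upload_file` needs a live Azure web app to actually connect):

```python
import xml.etree.ElementTree as ET
from ftplib import FTP
from urllib.parse import urlparse

def read_ftp_profile(publish_settings_xml):
    """Pull the FTP publish URL, username and password out of the
    XML content of a .PublishSettings file."""
    root = ET.fromstring(publish_settings_xml)
    for profile in root.iter("publishProfile"):
        if profile.get("publishMethod") == "FTP":
            return (profile.get("publishUrl"),
                    profile.get("userName"),
                    profile.get("userPWD"))
    raise ValueError("no FTP publish profile found")

def upload_file(publish_url, user, password, local_path, remote_name):
    """Upload one local file to the site's FTP endpoint
    (defined as a sketch only; requires a real Azure web app)."""
    parsed = urlparse(publish_url)   # hostname plus the site/wwwroot path
    ftp = FTP(parsed.hostname)
    ftp.login(user, password)
    ftp.cwd(parsed.path)             # change into the site's web root
    with open(local_path, "rb") as f:
        ftp.storbinary("STOR " + remote_name, f)
    ftp.quit()
```

This mirrors what the File Explorer steps do by hand: read the publish URL and credentials, log in, and copy files into the web root.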


Step 3 : Copy your files into the folder connected to Azure


Once the files are copied, go back to the Azure Portal and click on your web URL; you will be able to see your website. Let’s see what I get when I click on the URL. The URL is


Yay!! My website got hosted. So, this is one simple way of hosting your website on Azure using FTP. Apart from this, you can also host your website using Visual Studio, Visual Studio Online, WebMatrix, or Git.
Best Regards,

Ajay Kumar Jogawath,

R & D Engineer,


Hive India Makerfest’15 Ahmedabad



Hi everyone!!!

I am back after a long time. This time it’s about my experiences at Maker Fest ’15, organized at CEPT University, Ahmedabad.

The Maker Fest was a once-in-a-lifetime experience where we learned, taught, shared, and much more.


Let me brief you on the Maker Fest and what we did over there. Yes, it’s we Mozillians who were there to teach, to learn, and to share.

On the first day, I met some amazing Mozillians: Sujith after a long time, and Mayur, Prathamesh, Tripad, and Abid for the first time. We arranged our booth/stall over there. Then, we had the inauguration ceremony.







It was really a great time teaching thousands of students about Webmaker tools and how to kick-start their contribution to Mozilla. They were excited to be a part of the FSA Program and showed a lot of interest in developing Firefox OS apps.





Introducing Hadoop – HDFS and Map Reduce



Hi Friends,

In the last post, we went through the history of Hadoop. In this blog we will understand: What is Hadoop? What does it consist of? And where is it used?

  • The Hadoop platform consists of two key services: a reliable, distributed file system called Hadoop Distributed File System (HDFS) and the high-performance parallel data processing engine called Hadoop MapReduce.
  • Hadoop was created by Doug Cutting and named after his son’s toy elephant. Vendors that provide Hadoop-based platforms include Cloudera, Hortonworks, MapR, Greenplum, IBM and Amazon.

Data Distribution

  • Hadoop distributes data across machines and processes it in parallel; the file system used here is a distributed file system.

Advantages of Distributed File Systems are

[1] I/O Speed

[2] Less processing time

Imagine a single machine processing 1 TB of data; within some time it will finish. But what if the data is larger, say 500 TB?


If it takes about 45 minutes to process 1 TB of data using a traditional database, how long will 500 TB take?

DFS time

It would take a lot of time, and effective processing speed would drop. So, in order to overcome this problem, we go with a DFS (Distributed File System).
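The back-of-the-envelope arithmetic above can be sketched as follows (a toy model assuming the work scales linearly with data size and splits evenly across machines, ignoring coordination overhead):

```python
def processing_minutes(data_tb, minutes_per_tb=45, machines=1):
    """Estimated processing time in minutes, assuming linear scaling
    with data size and an even split across machines (overhead ignored)."""
    return data_tb * minutes_per_tb / machines

# 1 TB on one machine takes 45 minutes, so 500 TB takes 22,500 minutes
# (roughly 15.6 days); spread across 100 machines it drops to 225 minutes.
single = processing_minutes(500)
cluster = processing_minutes(500, machines=100)
```

This is exactly the motivation for a distributed file system: the same data, spread over many commodity machines, finishes in a fraction of the time.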

High Level Architecture

Hadoop Architecture mainly consists of HDFS and Map Reduce

[1] Hadoop Distributed File System

[2] Map Reduce

HDFS is used for storage and Map Reduce is used for processing the large data sets.

Hadoop Distributed File System

Hadoop follows a master-slave architecture, with three master daemons and two slave daemons.

HDFS Architecture

  • Master daemon:

[1] Name Node

[2] Secondary Name node

[3] Job tracker

  • Slave daemon:

[1] Data Node

[2] Task tracker

  • Why HDFS?

[1] Highly fault tolerant (Power failures)

[2] High throughput (Reduce the processing time)

[3] Suitable for applications with large datasets

[4] Streaming access to file system data (write once, use many times)

[5] Can be built out of commodity hardware

  • Design of HDFS

A file system designed for storing very large files, with streaming data access patterns, running on clusters of commodity hardware.

  • Where HDFS is not used?

[1] Low latency data access

[2] Lots of small files

  • Daemon:

[1] It is a service or process running in the background.

[2] It is a term used in UNIX technology

  • Name node:

[1] Masters the system

[2] Maintains and manages the blocks which are present on data nodes

  • Data node:

[1] Slaves which are deployed on each machine and provide actual storage

[2] Responsible for serving read/write requests from the clients

job tracker

  • Secondary Name node:

[1] Name node stores the data in RAM

[2] Secondary name node stores the data in file system Ex: HDD

  • Edit log:

[1] It records changes to the file system metadata; the secondary name node periodically merges the edit log into the namespace image and stores the resulting checkpoint

Map Reduce

It is an algorithm used in the Hadoop framework for processing large datasets.

In order to understand Map Reduce and how it works, we need to look at a few terms used in processing the data.

[1] Data

[2] Input Splitter

[3] Record Reader

[4] Mapper

[5] Intermediate Generator

[6] Reducer

[7] Record Writer

[8] HDFS

  • So, we have seen the architecture of HDFS; now we will discuss the terms used in the Map Reduce algorithm.
  • As we are processing large data sets, the first term in this process is DATA: the input data that we want to process.
  • Once we have the data in our file system, that is, HDFS, we divide/split it into blocks.
  • These blocks are also called chunks. The default block size is 64 MB, but it can be raised to 128 MB to make processing faster.
  • This means that the input data is split into blocks of the size we want using the Input Splitter.
  • Next, we have a Record Reader, which maps the data present in each block to a mapper. Remember that the number of mappers equals the number of blocks: HDFS creates as many mappers as there are blocks or chunks, and this mapping is done by the Record Reader.
  • A Mapper holds the data present in its block. In actual terms, it is a class (specifically a Java class) in which the initialization part is done.
  • The Intermediate Generator collects all the output from the mappers and sends it to the reducer for processing. Processing may include retrieving, inserting, deleting, or any other kind of function or calculation.
  • Sometimes we may have duplicate or repetitive data; in that case the intermediate generator performs a sort and shuffle, grouping the mapper output by key, then collects it and sends it to the Reducer.
  • The Reducer is also a Java class, which processes the data based on the code we write in it. Once the whole data is processed, it is sent back to the file system, that is, HDFS. This data is sent to HDFS using the Record Writer.
  • So, what is HDFS?
  • HDFS is the Hadoop Distributed File System, which we discussed above, and it just stores the data. The processing part is done by Map Reduce.
  • Finally, we can say that HDFS and Map Reduce together make up Hadoop, for storing and processing data.
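The pipeline described above (split, map, shuffle, reduce) can be simulated in plain Python with the classic word-count example. This is only an illustration of the flow, not Hadoop itself: each string below stands in for one 64 MB block, with one mapper per block.

```python
from collections import defaultdict

def mapper(split):
    """Emit a (word, 1) pair for every word in one input split."""
    return [(word, 1) for word in split.split()]

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key, as the
    intermediate generator's sort-and-shuffle phase does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Collapse all values for one key into a final count."""
    return key, sum(values)

# Each string stands in for one block/chunk of the input file.
splits = ["deer bear river", "car car river", "deer car bear"]
mapped = [pair for split in splits for pair in mapper(split)]
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
# counts == {"deer": 2, "bear": 2, "river": 2, "car": 3}
```

In real Hadoop the mappers and reducers run on different machines and the Record Reader/Writer move data in and out of HDFS, but the data flow is the same.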

History of Hadoop and Map Reduce

Hi Guys!

This post introduces Hadoop and Map Reduce. In our previous post we discussed: What is Big Data? What are its types? And the reasons to learn Hadoop.

Lets see how Hadoop came into existence.

History of Hadoop

  • Even though the word “Hadoop” may be new to you, it is already 10 years old now.
  • As everyone has a history, our Hadoop also has a rich history.
  • The story begins on a sunny afternoon in 1997, when Doug Cutting (later a Yahoo! employee) started writing the first version of Lucene.

What is Lucene ?

  • Lucene is a text search library designed by Doug Cutting. This library was used for the faster search of web pages.
  • But after some years he experienced “Dead Code Syndrome,” so, looking for a better home for it, he open-sourced it on SourceForge.
  • In 2001, it became Apache Lucene, and the focus then shifted to indexing web pages.
  • Mike Cafarella, a graduate of the University of Washington, joined him to index the entire web.
  • This combined effort yielded a new Lucene sub-project called Apache Nutch.
  • An important algorithm, that’s used to rank web pages by their relative importance, is called PageRank, after Larry Page, who came up with it
  • It’s really a simple and brilliant algorithm, which basically counts how many links from other pages on the web point to a page. The page that has the highest count is ranked the highest (shown on top of search results). Of course, that’s not the only method of determining page importance, but it’s certainly the most relevant one.
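The simple link-counting idea described above (before damping factors and iteration turn it into full PageRank) might be sketched like this; the page names and function are purely illustrative:

```python
from collections import Counter

def rank_by_inlinks(link_graph):
    """Rank pages by how many other pages link to them.
    `link_graph` maps each page to the list of pages it links to."""
    counts = Counter(target
                     for targets in link_graph.values()
                     for target in targets)
    for page in link_graph:          # pages nobody links to get a count of 0
        counts.setdefault(page, 0)
    return counts.most_common()      # highest-ranked first

# Toy web: both a.html and b.html link to c.html, so c.html ranks first.
links = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": [],
}
ranking = rank_by_inlinks(links)
```

Real PageRank goes further by weighting each inbound link by the rank of the page it comes from, but the counting intuition is the same.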

Origin of HDFS

  • During this course of time, Cutting and Cafarella realized the file system they needed had to satisfy four requirements that existing file systems did not:

[1] Schema less (no tables and columns)

[2] Durable (once written, data should never be lost)

[3] Capability of handling component failure ( CPU, Memory, Network)

[4] Automatically re-balanced (disk space consumption)

Google’s Solution

  • In 2003, Google published GFS Paper. Cutting and Cafarella were astonished to see solutions for the difficulties they were facing during this time.
  • Therefore, using this GFS paper and implementing the ideas in Java, they developed their own file system, called NDFS (Nutch Distributed File System).
  • But the problem of durability and fault tolerance was still not solved.
  • Thus, they came up with the idea of distributed processing: dividing files into 64 MB chunks and storing each chunk on three different nodes (the replication factor, which defaults to 3).
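The chunking-plus-replication scheme just described can be sketched as follows. This is a toy round-robin placement policy of my own for illustration; real HDFS placement is rack-aware and more sophisticated:

```python
import math

def place_chunks(file_size_mb, nodes, chunk_mb=64, replication=3):
    """Split a file into fixed-size chunks and assign each chunk to
    `replication` distinct nodes, round-robin (toy placement policy)."""
    n_chunks = math.ceil(file_size_mb / chunk_mb)
    return {chunk: [nodes[(chunk + r) % len(nodes)]
                    for r in range(replication)]
            for chunk in range(n_chunks)}

# A 200 MB file on a four-node cluster: ceil(200/64) = 4 chunks,
# each stored on three different nodes.
plan = place_chunks(200, ["node1", "node2", "node3", "node4"])
```

With each chunk on three nodes, any single machine can fail and every chunk is still available somewhere, which is how NDFS addressed durability and fault tolerance.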

Time for Map Reduce

  • Now, they needed an algorithm for NDFS, because they wanted to integrate parallel processing, that is, running on multiple nodes at the same time.
  • Thus, in 2004, Google published a paper called MapReduce: Simplified Data Processing on Large Clusters.
  • This algorithm solved problems like

[1] Parallelization

[2] Distribution

[3] Fault-tolerance

Rise of Hadoop

  • In 2005, Cutting reported that Map Reduce had been integrated into Nutch.
  • In 2006, he pulled NDFS and Map Reduce out of the Nutch codebase and named the new project Hadoop.
  • Hadoop included Hadoop Common (core libraries), HDFS, and Map Reduce.
  • Later, Yahoo! was facing the same problems; it employed Cutting, and transforming its file system to Hadoop is what actually saved Yahoo!.

Facebook, Twitter, LinkedIn…

  • Later, companies like Facebook, Twitter, and LinkedIn started using Hadoop.
  • In 2008, Hadoop was still a sub-project of Lucene, so Cutting made it a separate top-level project under the Apache Software Foundation.
  • Other companies started noticing problems with their file systems, began experimenting with Hadoop, and created sub-projects like Hive, Pig, HBase, and ZooKeeper.

This is all about how Hadoop and Map Reduce came into existence. People say Hadoop is a new technology but it is already 10 years old.

Best Regards,

Ajay Kumar Jogawath

Research and Development Engineer

Big Data Evangelist

5 Reasons to Learn Hadoop



Hi Guys!

This blog post covers the top five reasons to learn Hadoop. Let’s go through them one by one.


Big Data and Hadoop skills could mean the difference between having your dream career and getting left behind.
Dice has  quoted, “Technology professionals should be volunteering for Big Data projects, which makes them more valuable to their current employer and more marketable to other employers.”

Career with Hadoop

According to 90 executives who participated in the ‘The Big Data Executive Survey 2013’ conducted by NewVantage Partners LLC, supported by the Fortune 1000 senior Business & Technology executives, 90% of the organizations surveyed are already doing something with Big Data.

Hadoop skills are in demand – this is an undeniable fact! Hence, there is an urgent need for IT professionals to keep themselves current with Hadoop and Big Data technologies. The infographic above shows how many organizations are influenced by Big Data and are looking to implement it, if they haven’t already.

Big Data Implementation


More Job Opportunities with Apache Hadoop

  • Looking at the Big Data market forecast, it looks promising and the upward trend will keep progressing with time.
  • Therefore the job trend is not a short-lived phenomenon, as Big Data and its technologies are here to stay.
  • Hadoop has the potential to improve job prospects whether you are a fresher or an experienced professional.


  • The Indian Big Data industry is predicted to grow five-fold from the current level of $200 million to $1 billion by 2015, which is 4% of the expected global share.
  • At the same time Gartner has predicted that there is going to be significant gap in job openings and candidates with Big Data skills.
  • This is the right time to take advantage of this opportunity. This skill gap in Big Data can be bridged through comprehensive learning of Apache Hadoop, which enables professionals and freshers alike to add valuable Big Data skills to their profile.

Filled Jobs vs Unfilled Jobs


Look who is employing?

[3] Where to search for Jobs in Hadoop?

  • LinkedIn is the best place to get information on the number of existing Hadoop professionals.
  • The infographic above shows the top companies employing Hadoop professionals and which of them leads the pack.
  • Yahoo! happens to be leading this race.

According to Dice,

  • Dice has stated that “Tech salaries saw nearly a 3% bump last year, and IT pros with expertise in Big Data-related languages, databases and skills enjoyed some of the largest pay checks.”

Big Data and Hadoop equal bucks!

[4] Top Hadoop Technology Companies


Understanding Big Data


Hi Guys!

This blog post is about understanding Big Data. We will look at what exactly Big Data is, its types, its characteristics, and the use-cases of Big Data.

Word Cloud

So, What is Big Data ?

Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, and information privacy.

During the 1990s, when IT organizations were evolving, it was the employees of those organizations who generated the data.

Before 2000

Later, in the 2000s, when social networking sites and e-commerce websites came into existence, even users started generating data.

after 2000

Now, after 2010, due to emerging smartphone technologies and motion-sensor techniques, even devices started generating data.

How much data is generated per minute ?

  • Facebook users share nearly 2.5 million pieces of content.
  • Twitter users tweet nearly 300,000 times.
  • Instagram users post nearly 220,000 new photos.
  • YouTube users upload 72 hours of new video content.
  • Apple users download nearly 50,000 apps.
  • Email users send over 200 million messages.
  • Amazon generates over $80,000 in online sales.

Data is generated from almost everywhere!

Data is generated from Healthcare, Multi-channel Sales, Finance, Log Analysis, Homeland Security, Traffic Control, Telecom, Search Quality, Manufacturing, Trading Analytics, Fraud and Risk, and Retail.


Data generated per minute

Data generated by Hadron Collider


Types of Data:

[1] Structured Data

[2] Unstructured Data

[3] Semi Structured Data

[1] Structured Data:

  • Your current data warehouse contains structured data and only structured data.
  • It’s structured because when you placed it in your relational database system a structure was enforced on it, so we know where it is, what it means, and how it relates to other pieces of data in there.
  • It may be text (a person’s name) or numerical (their age) but we know that the age value goes with a specific person, hence structured.

[2] Unstructured Data:

  • Essentially everything else that has not been specifically structured is considered unstructured.
  • The list of truly unstructured data includes free text such as documents produced in your company, images and videos, audio files, and some types of social media.
  • If the object to be stored carries no tags (metadata about the data) and has no established schema, ontology, glossary, or consistent organization it is unstructured.
  • However, in the same category as unstructured data there are many types of data that do have at least some organization.

[3] Semi Structured Data:

  • The line between unstructured data and semi-structured is a little fuzzy.
  • If the data has any organizational structure (a known schema) or carries a tag (like XML extensible markup language used for documents on the web) then it is somewhat easier to organize and analyze, and because it is more accessible for analysis may make it more valuable.

Example: Text ( XML, Email), Web Server logs and server patterns, sensor data

Characterization of Big Data:


3 V’s of Big Data:


Some call it the 4 V’s.

Applications and Use-cases of Big Data:


Popular Use-cases :

[1] A 360 degree view of the customer :

  • This use case is the most popular, according to Gallivan. Online retailers want to find out what shoppers are doing on their sites — what pages they visit, where they linger, how long they stay, and when they leave.
  • “That’s all unstructured clickstream data,” said Gallivan. “Pentaho takes that and blends it with transaction data, which is very structured data that sits in our customers’ ERP [business management] system that says what the customers actually bought.”

[2] Internet of Things :

  • The second most popular use case involves IoT-connected devices managed by hardware, sensor, and information security companies. “These devices are sitting in their customers’ environment, and they phone home with information about the use, health, or security of the device,” said Gallivan.
  • Storage manufacturer NetApp, for instance, uses Pentaho software to collect and organize “tens of millions of messages a week” that arrive from NetApp devices deployed at its customers’ sites. This unstructured machine data is then structured, put into Hadoop, and then pulled out for analysis by NetApp.

[3] Data warehouse optimization :

  • This is an “IT-efficiency play,” Gallivan said. A large company, hoping to boost the efficiency of its enterprise data warehouse, will look for unstructured or “active” archive data that might be stored more cost effectively on a Hadoop platform. “We help customers determine what data is better suited for a lower-cost computing platform.”

[4] Big data service refinery :

  • This means using big-data technologies to break down silos across data stores and sources to increase corporate efficiency.
  • A large global financial institution, for instance, wanted to move from next-day to same-day balance reporting for its corporate banking customers. It brought in Pentaho to take data from multiple sources, process and store it in Hadoop, and then pull it out again. This allowed the bank’s marketing department to examine the data “more on an intra-day than a longer-frequency basis,” Gallivan told us.
  • “It was about driving an efficiency gain that they couldn’t get with their existing relational data infrastructure. They needed big-data technologies to collect this information and change the business process.”

[5] Information security :

  • This last use case involves large enterprises with sophisticated information security architectures, as well as security vendors looking for more efficient ways to store petabytes of event or machine data.
  • In the past, these companies would store this information in relational databases. “These traditional systems weren’t scaling, both from a performance and cost standpoint,” said Gallivan, adding that Hadoop is a better option for storing machine data.

Traditional Databases :

  • The relational database management system (or RDBMS) had been the one solution for all database needs. Oracle, IBM (IBM), and Microsoft (MSFT) are the leading players of RDBMS.
  •  RDBMS uses structured query language (or SQL) to define, query, and update the database.
  • However, the volume and velocity of business data has changed dramatically in the last couple of years. It’s skyrocketing every day.
  • Limitations of RDBMS in supporting “big data”:
  • First, the data size has increased tremendously to the range of petabytes—one petabyte = 1,024 terabytes. RDBMS finds it challenging to handle such huge data volumes.
  • To address this, RDBMS added more central processing units (or CPUs) or more memory to the database management system to scale up vertically.
  • Second, the majority of the data comes in a semi-structured or unstructured format from social media, audio, video, texts, and emails.
  • However, the second problem related to unstructured data is outside the purview of RDBMS because relational databases just can’t categorize unstructured data.
  • They’re designed and structured to accommodate structured data such as weblog sensor and financial data.
  • Also, “big data” is generated at a very high velocity. RDBMS lacks in high velocity because it’s designed for steady data retention rather than rapid growth.
  • Even if RDBMS is used to handle and store “big data,”  it will turn out to be very expensive.
  • As a result, the inability of relational databases to handle “big data” led to the emergence of new technologies.

MozCoffee Warangal



Hello Everyone!

How are you all? Hope you are doing well. This post is about the recent meetup “MozCoffee Warangal”, organized on August 9th, 2015.

MozCoffee Warangal Badge


MozCoffee Warangal is a gathering of Mozillians from Warangal to discuss the future plans of the Mozilla Warangal Community with the existing contributors. Recruiting FSAs, Mozillians, and Firefox Clubs will be the main goal of this meetup, apart from the Maker Party event.

Event Link :

Venue : Jagruthi e Learning Center, 2nd Floor, GMR & GS Complex, Kishanpura, Hanamkonda, Warangal, Telangana, India.

The meetup started at 2:00 PM with eight attendees; three of them were very new, and it was their first meetup. After a quick round of introductions all around, I introduced open source, Mozilla, its mission, projects, and products.

Ajay Speaking on Open Source, Mozilla and its mission, Projects and Products


Later, I spoke about the FSA Project and told them about forming a Firefox Club and how to organize activities in it. Three attendees from the KITSW Fox Club are already doing a great job; they organized an event named KitZilla’15 on their campus.

Tracy from KITS Fox Club Speaking on the Next Plans of their Club


They are also connected with Swecha, an open-source community, and are planning an event at their college to spread openness and Firefox projects across the entire city.

Nipun, Sharing his experience with Mozilla Firefox Browser and his designing Contribution towards Community.


So, we discussed what could be done at the event, along with sessions on Maker Party, TeachTheWeb, Firefox OS, and developing apps for it.

Finally, we had a MozQuiz, where we asked attendees to speak on their favorite topic from the day’s meet. We gave some swag as encouragement to all the attendees.


Tracy and Pavan, FSA’s from KITSW Firefox Club

The new folks were very excited about what comes next at their college.

Outcome :

3 Active Contributors

3 New Contributors

2 Firefox Club Launch (to be done)

We also got an awesome contributor, Nipun Dumpala, who has been helping in making logos, badges, and designs for the Mozilla Warangal Community. Thanks, Nipun.

Nipun’s work can be viewed here

And Finally, we had a selfie 😀

A Selfie with Mozillians


Flickr Link : Click here for Photos

Best Regards,

Ajay Kumar Jogawath

Mozilla Representative,

Azure Blob Storage – Creating a Container || Upload, Download & Delete a Blob File



Hello Everyone,

This post will help you quickly get started with Microsoft’s Azure Storage services. Basically, I am going to explain how to create a container, upload a blob file into the container, download the blob file, and finally delete a blob file. We will see these one after the other.

Many of you might already know what a blob is, but let me tell you: BLOB stands for Binary Large Object, and it can store files of large size. Blobs are classified into block blobs and page blobs: a block blob can store up to 200 GB, and a page blob up to 1 TB.

This sample was prepared on Windows 10, with Visual Studio 2013 Update 4. You also need to have an Azure account.

OK, let’s start with creating a container in the Azure storage account using a Windows Store app. Open Visual Studio; it will look like this.

Visual Studio Home Page

Create a new project under Windows Apps and give it a name (say, blobstoragedemo).

Create New Project
In order to communicate with Microsoft Azure, we need to add some libraries.
Go to References and click on Manage NuGet Packages. Search for Windows Azure Storage; you will get the first result as shown, and install the package.

[3] Nuget Packages

After downloading, click on I Accept to install the NuGet packages.

[4] Accepting Licences

After accepting, the library files shown above will be added to the project.

[5] Reference Files Added

Generally, in order to upload and download files to blob storage, we need to understand what exactly a blob is and where it is stored.
In Microsoft Azure, Storage is one of the services under the Data Services category.
In this service, we create a storage account and then create containers into which we insert, upload, and download files, i.e., the blobs.
In this demo, we will create a container from our application, then upload a blob, download a file, and then delete a blob file from Azure.
OK, moving back to Visual Studio: one more thing we need to take care of is adding namespaces to mainpage.xaml.cs.
In the Solution Explorer, on the right side of the Visual Studio IDE, open mainpage.xaml.cs.
Add the following namespaces to the existing ones.

[6] Adding namespaces

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Auth;

Now, we need to create a container if there is none in Azure yet. So, in Visual Studio, click on mainpage.xaml and add a button; using this button we will create a container in Azure.

Give it a name (nothing but a label) in the Properties bar, with content (say) Create Container. Generate a button click event by double-clicking the button on the mainpage.xaml page.

Open the Microsoft Azure portal; at the bottom there is a + button. Click on it to create a storage account.

[7] Creating Storage Account

After creating a storage account, click on Manage Access Keys to get the primary and secondary keys associated with the storage account.

[8] Access Keys - Primary Key associated with Storage account

Now we have a storage account. Go to Visual Studio, generate the button click event, and write the following code to create a container.

private async void createButton_Click(object sender, RoutedEventArgs e)
{
    StorageCredentials sc = new StorageCredentials("blobstoragedemo1", "O2c3neazQlou0wm1q2/25EaaV7eHlj+9SeurjsvYQs5omkbaUh4+CqpjUtTVB603ydJ5fsm5foxZMOHEwWGlNQ==");
    CloudStorageAccount storageAccount = new CloudStorageAccount(sc, true);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("democontainer");
    await container.CreateIfNotExistsAsync();
    MessageDialog md = new MessageDialog("Your Container is Created");
    await md.ShowAsync();
}

Using the storage account credentials, this code creates a container with the given name if it does not already exist.

Now run the app on your local machine and check whether a container has been created. You can see the new container in the Azure Portal. This is how we successfully create a container using a Windows app.

Running the app - Creating a Container

Successfully created a container.

Container Visible on Azure

Now, let’s upload files into the container.

So, again go back to Visual Studio and open MainPage.xaml from the same project. Insert a button on the XAML page and set its content to "Upload".

Generate the button click event and insert the following code into the method.

private async void uploadButton_Click(object sender, RoutedEventArgs e)
{
    StorageCredentials sc = new StorageCredentials("blobstoragedemo1", "O2c3neazQlou0wm1q2/25EaaV7eHlj+9SeurjsvYQs5omkbaUh4+CqpjUtTVB603ydJ5fsm5foxZMOHEwWGlNQ==");
    CloudStorageAccount storageAccount = new CloudStorageAccount(sc, true);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("democontainer");

    // The uploaded file will be stored in the container as "demo.jpg"
    CloudBlockBlob blockBlob = container.GetBlockBlobReference("demo.jpg");

    // Let the user pick an image from the Pictures library
    FileOpenPicker openPicker = new FileOpenPicker();
    openPicker.ViewMode = PickerViewMode.Thumbnail;
    openPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
    openPicker.FileTypeFilter.Add(".jpg"); // the picker requires at least one file type filter

    StorageFile file = await openPicker.PickSingleFileAsync();
    if (file == null) return; // user cancelled the picker

    await blockBlob.UploadFromFileAsync(file);

    MessageDialog md = new MessageDialog("File has been uploaded Successfully");
    await md.ShowAsync();
}


Now run your application and you will see an Upload button, with which you can select a picture and upload it to blob storage.

Once it is uploaded you will get a success message. To verify, go to the Azure Portal and open the storage account and then the container; you can see your blob file uploaded into the container.

Uploading Blob File

Picking File from Folder

Blob File Uploaded Successfully

Blob file uploaded into Container in Azure Portal

Go to MainPage.xaml and add one more button named "Download", adjusting the font size as per your preference. Now generate a click event and write the code for the download in that event.

The code for the download is the following:

private async void downloadButton_Click(object sender, RoutedEventArgs e)
{
    StorageCredentials sc = new StorageCredentials("blobstoragedemo1", "O2c3neazQlou0wm1q2/25EaaV7eHlj+9SeurjsvYQs5omkbaUh4+CqpjUtTVB603ydJ5fsm5foxZMOHEwWGlNQ==");
    CloudStorageAccount storageAccount = new CloudStorageAccount(sc, true);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("democontainer");
    CloudBlockBlob blockBlob = container.GetBlockBlobReference("demo.jpg");

    // Let the user choose where to save the downloaded blob
    FileSavePicker savePicker = new FileSavePicker();
    savePicker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;

    // Dropdown of file types the user can save the file as
    savePicker.FileTypeChoices.Add("Picture", new List<string>() { ".jpg" });

    // Default file name if the user does not type one in or select a file to replace
    savePicker.SuggestedFileName = "New Blob File";

    StorageFile file = await savePicker.PickSaveFileAsync();
    if (file == null) return; // user cancelled the picker

    await blockBlob.DownloadToFileAsync(file);

    MessageDialog md = new MessageDialog("File has been Downloaded Successfully");
    await md.ShowAsync();
}


OK, once you run your app you will see a Download button; click on it and you will get a save dialog to save the blob file.

Thus, we have seen how to download a blob file from Azure.

Download Blob File using App from Azure

Saving the Blob File

File Downloaded Successfully

Go to MainPage.xaml, add one more button named "Delete" and generate a click event. By clicking this button we will be able to delete a blob file from the Azure Storage container.

After the design part, add the following code in the MainPage.xaml.cs file.

private async void deleteButton_Click(object sender, RoutedEventArgs e)
{
    StorageCredentials sc = new StorageCredentials("blobstoragedemo1", "O2c3neazQlou0wm1q2/25EaaV7eHlj+9SeurjsvYQs5omkbaUh4+CqpjUtTVB603ydJ5fsm5foxZMOHEwWGlNQ==");
    CloudStorageAccount storageAccount = new CloudStorageAccount(sc, true);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("democontainer");
    CloudBlockBlob blockBlob = container.GetBlockBlobReference("demo.jpg");

    // Delete the blob from the container
    await blockBlob.DeleteAsync();

    MessageDialog md = new MessageDialog("File has been Deleted Successfully");
    await md.ShowAsync();
}


Now run your app and click on the Delete button to delete the file from Blob Storage.

This is how we create a blob storage account and a container, and then upload, download and delete blob files from a container in an Azure Storage Account.

Delete Blob File

Blob File Deleted Successfully

My Kickstart with Edukinect



Hi Guys!

My name is Ajay Kumar Jogawath and I joined Edukinect as a Research and Development Engineer on April 9th, 2015. Edukinect is a leading Microsoft Academic Partner, working across technology domains on products for Education, Healthcare, HCI and many more. We also work with more than 150 campuses across India, helping them drive technology and innovation at the grass-roots level, catering to more than 15,000 students.

My responsibilities here include developing mobile applications for Windows Phone, working with Microsoft Azure Services and integrating them with mobile applications, and working with Big Data and Hadoop. I also deliver sessions on these technologies at various colleges and universities across India.

This is an introductory post; I will write about the different technologies I learn during my journey with Edukinect. Thank you!

Best Regards,

Ajay Kumar Jogawath,

Research and Development Engineer