Posts Tagged ‘Technologies’

There is another new browser I have recently tried. It comes from the Netscape group and is coupled with Facebook. I have been using Facebook a lot more extensively for the last couple of days, and that is why I have started liking this new browser. Here is the original RockMelt site. I am also putting down the article that came on Yahoo News:

“A new internet browser that requires a Facebook log-in has been unveiled, aimed squarely at social networking users.

Called RockMelt, it has been set up by Marc Andreessen, the founder of Netscape.

Based on Google’s Chromium software, Rockmelt is designed to let users share everything they do with the friends on Facebook and Twitter.

Down the side of each web page visited is a selection of each user’s most-used Facebook Friends and Twitter contacts, reports the Daily Mail.

A statement on the firm’s blog read: ‘With RockMelt, we’ve re-thought the user experience, because a browser can and should be about more than simply navigating Web pages.

‘Today, the browser connects you to your world. Why not build your world right into your browser?’

The browser makes it particularly easy to share links with friends by dragging pictures, URLs or videos onto one of the small photos, known as ‘edges’ that line the browser’s window.

Because you have to sign in before using Rockmelt, all of your favourite sites, blogs and friends are listed when you log in.

The browser alerts you when a new story, video, or post appears on the sites you visit the most, without you having to leave the webpage you are currently on.

However, the fact that a user’s entire web search history, friends and favourite sites are known by RockMelt, will alarm those wary of handing over personal information to tech firms.

The firm claims that this browsing information will not be sold to advertisers.

‘We are not going to run an ad network. We actually don’t know where you go,’ co-founder Tim Howes told website TechCrunch. ‘That information does not leave your browser.’

The way you search on the internet is also different with RockMelt. Instead of a whole page of search results from Google, only the first 10 results come up, displaying the web page of each result before you click on it.”

This video feature is not new in Gmail; in fact, if I remember correctly, it is almost one and a half to two years old. But this is the first time I have used it. Nor is it the first time I am using video chat: I have used it several times on Skype. A couple of years back I used it regularly for almost three years (amazingly, during the times when we did not have fast broadband internet, and when laptops were not very common, so to use Skype one had to install a webcam, and it was not easy). For the past few months, almost a year, I have been using Skype in the office for client calls. In fact, it was my client who insisted on having the status meetings on Skype rather than over a phone call. With all this, what I want to say is that I am neither new to video chatting nor is this feature new in Gmail, so why am I suddenly writing about it now?

Well, it is because this is the first time I have used this feature in Gmail, and to be honest, I am very much impressed. I liked the overall performance, especially because it does not add any extra load on the internet connection, since it works from within Gmail itself. Also, one does not need to create a separate account, download and install separate software, or log onto that software every time. And since Gmail is one of the most popular mail applications, it is easier to find friends here than on Skype or any other application.

Now coming to the technical perspective. I agree that the video quality is not as good as Skype's, but it is not bad either; it is good enough for a one-on-one conversation. Also, as I wrote earlier, the performance (I mean the voice quality, the audio quality) is very good, comparable to Skype, or I can even say better. So overall I would rate this feature as a good one, and recommend others to use it, especially those who have not used Skype so far and want to try video chatting.

I now wonder why I have not used this feature for such a long time. It is maybe because I have not used video chat for personal use for almost two years now. I wish I had used it earlier, but as they say, "Better late than never".

In the end, I would again recommend this feature of Gmail to others, as it is really good; or I can say more than good, it is really very comfortable to use.

The appraisals are just around the corner in my company, and I guess that will be the case in most companies. Another common thing in appraisals is the 360-degree review; most companies now follow this practice. It is widely used since it has a lot of positives, and if correctly implemented it can be very useful, as it breaks the barrier between juniors and seniors in the company and lets them come up with their points and thoughts much more freely. This practice does have a few negatives as well, but its positives are so many and so effective that it has become highly preferred. Still, in all this one must not forget the negatives it can bring. So in this post I will concentrate more on the negatives of this system.

As I wrote earlier, the appraisals are just around the corner, and I also happened to receive a very good e-mail a few days back regarding how to bounce back from a negative 360-degree review. So I thought it was a really good time to share this information. It is very helpful if one can keep all the points in mind. Though I don't think I will probably need it in my case, keeping the points in mind is not bad at all. If anything, it will only help one's case.

Here is the complete article:

Bouncing Back from a Negative 360-Degree Review
Unlike traditional reviews and other types of feedback, 360-degree reviews include input from a comprehensive set of people: peers, managers, direct reports, and sometimes customers. One of the most valuable aspects of this tool is that the opinions are voiced anonymously, which encourages a higher level of honesty than you might normally get. However, the truth is not always pretty, and receiving a negative 360-degree review can be upsetting, especially when the opinions are echoed at many levels. But with the right attitude, you can still create a positive experience. How you handle a bad 360-degree review is far more important than the content of the review itself.

What the Experts Say
Before you begin the 360-degree review process, it’s important to have an open mindset. Remember that no one is perfect and every manager, no matter how seasoned, has room to improve. “The best leaders aren’t those who don’t have a lowest score on a 360. The best leaders have standout strengths,” says Susan David, co-director of the Harvard/McLean Institute of Coaching, founding director of Evidence Based Psychology LLC, and a contributor to HBR’s The Conversation blog. It’s your job to figure out what to do about those low scores. Larissa Tiedens, the Jonathan B. Lovelace Professor of Organizational Behavior at Stanford Business School and co-editor of  The Social Life of Emotions agrees. “Being reflective and changing after a negative review is often more impressive than getting positive reviews from the start. Thus, a negative review is an opportunity to show that you can listen and learn,” she says. Here are several principles to follow if you receive a less than stellar 360-degree review.

Reflect before reacting
After you receive the feedback, let the results sink in before you do anything. “Sometimes people want to respond too quickly before they have sufficiently reflected upon it,” says Tiedens. Try not to be defensive. “Receiving feedback can bring our most vulnerable and self-critical parts to the fore,” says David. Counter this instinct by asking questions and being sympathetic with yourself and those who gave feedback. “The stance that is most helpful in receiving feedback is when you consciously try to draw on your curious and compassionate parts — those aspects of you that genuinely want to learn, hear, and understand,” says David. Once you’ve taken time to process it, ask yourself whether the feedback rings true. Does it echo what you’ve heard in past reviews or from other people in your life, including those outside of work? Sometimes it can be helpful to talk with a colleague, your manager, or a mentor and get an additional perspective from someone you trust.

Avoid a witch hunt
While 360-degree reviews are intended to be anonymous, it is sometimes easy to tell who said what from the comments. It may be difficult to resist this type of deciphering; however, you should resist the temptation to reach out to your reviewers and address their input. “Typically, the respondents provide their feedback with the understanding that they won’t be sought out to discuss their individual comments, so you risk harming the process and the general level of trust if you try to discover the individual source,” says Tiedens. Rusty O’Kelley, a partner at Heidrick & Struggles’ Board Consulting and Leadership Consulting Practices who has conducted hundreds of 360-degree reviews as part of his work on CEO succession planning and transition management, echoes this point. “It’s important to protect the people who gave you feedback so that they can be honest. Where 360s often fail is when people are diplomatic instead of straightforward,” he says.

Decide what to respond to
Remember that the review is made up of opinions. This means you don’t have to react to everything. A 360-degree review is different from a formal review by your boss in that you aren’t obligated to address the feedback. Instead, be selective about what you are going to change. Responding to every piece of feedback would be a colossal waste of time. “Just as you wouldn’t rush out and replace your car because someone didn’t approve of it, it isn’t necessary to rush out and try to change yourself and doctor your personality or behavior because of a piece of negative feedback on a 360,” says David. Instead, she suggests that leaders use three criteria to decide when to attend to a low score:

  1. Is this a consistent problem? Has it come up in previous reviews and from different raters?
  2. Is the problem a fatal leadership flaw? Does it point to lack of integrity, authenticity, or honesty?
  3. Is it incongruent with your values? Does it conflict with the type of leader you want to be? “Your values are your anchor and they should inform the leadership principles that you try to live up to,” she says.

Many 360-degree review tools cluster feedback according to its source, whether it comes from direct reports, peers, customers, etc. Take note of what level the feedback is coming from. “In some ways, it is even more important to be responsive to what you hear from those lower in the hierarchy,” says Tiedens. “Subordinates took a bigger risk in raising these issues and have fewer avenues to discuss them with you, which suggests that these things are really bugging them and may mean they are even more confident of their views.”

Commit to change
When making a plan to change, focus on the future. Don’t start immediately altering things that will make you feel better now. Often this won’t help you achieve your goals in the long term. “While the pull of bad is stronger than good, if you are choosing an area to develop you might be better served by attending to an average score rather than your lowest score,” says David. It is unlikely, even with a great degree of work, that you will be able to move a low score to an off-the-chart strength. “Think about concrete behaviors you can engage in that would be responsive to negative feedback,” says Tiedens. David suggests creating mini-experiments where you choose one or two focus areas and create opportunities to try out a new behavior or way of being. Ask yourself: What’s the smallest thing I can do that will make the biggest difference? Then, once you’ve done that small thing, assess how it went. “Start developing proof points that show it will work,” says David. This is the foundation for change.

Talk with your manager or team
“The instinct is to hide and not talk about it, but since everyone participated, they are anticipating that some things will change,” says O’Kelley. Talk with your team and share an overview of the feedback you received. “You don’t need to give them every data point, but a general characterization of what the feedback said, both positive and negative, can be very useful for your team to hear,” says Tiedens. Make a commitment to your team or your manager as to what you are going to change and how. To keep you focused and to include them in the process, invite them to call you out when you aren’t living up to your promises.

How to handle outliers
Sometimes it’s clear from your 360-degree review that only one or two people had a certain negative opinion. Instead of completely dismissing that feedback, it’s important to reflect on it. It’s possible that others agree with the feedback but were afraid to express it in the assessment. If you have an outlier critique, do more research and try to assess whether it holds any truth. Then apply David’s three criteria from above to decide whether it deserves a reaction.

Principles to Remember

Do:

  • Remember that feedback — positive or negative — is an opportunity to see your leadership in new light
  • Ask yourself what the value of changing a behavior is before you spend time and energy on it
  • Commit to what you’re going to change and how with your team or manager

Don’t:

  • Try to seek out your detractors for more information
  • Attempt to change every negative behavior — be discerning about which ones to focus on
  • Instinctively focus on the negative — most reviews contain both good and bad feedback

Case Study #1: Deciding when not to react
When Aimee Fieldston’s small strategy firm was acquired by one of the big consulting companies, she received a much-deserved promotion to partner. About six months into her tenure, she was offered coaching and a 360-degree review as part of a development program for new partners. When she met with the coach before the review, she asked that he interview specific people. Aimee knew she had many fans in the organization but she was more curious to hear from some of her new peers and potential detractors.

The feedback report was primarily positive but included some useful areas of development around building a more commercial approach and developing a stronger team. The review also included some harsh feedback about Aimee as a person, indicating that some of her reviewers thought she had an irritating style. The coach noted that this was something he heard from a very small number of people. Aimee was taken aback as these were criticisms she hadn’t heard before. “It just wasn’t aligned with my sense of who I am,” she said. She was upset but rather than reacting right away, she took the time to reflect on it and consulted a more senior partner who had given her some frank, career-changing advice in the past. He agreed that the feedback didn’t resonate and asked her to think about whether there was any truth in it. If there wasn’t, he advised her to let it go. “Feedback sometimes is a gift that comes with a gift receipt,” he said.

Not responding was hard for Aimee. “I believe in feedback and I believed in this process,” she said. Ultimately, she chose to work on the things in the report that had rung true for her.

Case Study #2: Listening to your team
In 2004, Torrey Cady, a Battery Commander, was mid-way through a tour in Iraq. In accordance with the Army’s culture of feedback and continual improvement, Torrey decided that the tour midpoint was a good time to take the temperature of his roughly 100-person organization. Torrey had been in service for almost 20 years and had done several Command Climate Surveys (CCS). The CCS, the Army’s version of a 360-degree review, surveys all soldiers in a unit on issues of morale, leadership, and performance.

The results of Torrey’s CCS surprised him. His soldiers indicated that they thought he was unapproachable and was too busy speaking with Iraqi mayors and sheiks to spend enough time with them. This negative feedback was especially difficult for Torrey. “One of the strengths I thought I had, because I came up through the ranks, was being approachable, easy to talk to, and down to earth,” he said.

While his initial reaction was shock and disbelief, when he read the comments, he understood more about what was going on. Every day Torrey and his men went out on patrol so that Torrey could meet with an Iraqi official about rebuilding the country. His team would wait outside, patrolling the area to keep Torrey and themselves safe. When the meeting was done, Torrey would hop back in the Humvee and say, “Ok, let’s get going,” and they’d head back to base. He rushed them back because he wanted to keep his soldiers safe and give them as much time off as possible. The sooner they got back to base, the sooner his team could eat, call their families, etc. But it turned out that they wanted to know what had happened in Torrey’s meetings and why they had to wait in the hot sun all day. “From their perspective, I hadn’t done a good job of explaining what it was I was doing and why,” he said. “I realized that I was so task-oriented and mission-focused that I was ignoring the very people who were helping me achieve the mission.”

After taking in the feedback, Torrey sat the team down and shared what he had heard. He explained that while it had not been intentional, he now knew that his behavior was having a negative impact on them. Starting then, at the end of each patrol, Torrey committed to debriefing his team (not just his supervisor) on the meeting and how it went. He also made a concerted effort to spend more casual time with his soldiers. Three months later, Torrey did another CCS and the difference was drastic. His team clearly appreciated what he had changed and they now felt included in the mission.

Now, it is very much possible to type the Indian Rupee symbol from your computer. This procedure became available immediately after the sign became official, so I am a little late in posting about it. I have shared and uploaded the required font here. The steps to type the symbol are very simple, and are as follows:

  1. Download the font ‘Rupee Foradian.ttf’ from here and Save on your Desktop.
  2. Go to Control Panel.
  3. Open the Font folder. It is available in the Classic View of Control Panel.
  4. Paste this font into this folder. This will automatically install the required font.
  5. Open Microsoft Word or any other text editor.
  6. Change the font to ‘Rupee Foradian’.
  7. Press the ` key (the one above the Tab key).

This now works as the Rupee Symbol, for the selected font.

The key to be pressed to type the Rupee Symbol

This is a new browser launched especially for Indians. I was not sure if it was made by an Indian, but it seemed very likely. (In fact, it is being developed by Hidden Reflex; I just got a comment on this post from their founder, Alok.) From what I have seen of it so far, it is possibly built on Mozilla Firefox. But I must say, it is still very different and very unique. I really liked the browser very much, and for Indians it will be special, as now we will not have to go to any site to type our content in Hindi. This browser provides a sidebar by default where one can type content in the Hindi language.

Some of its features are really amazing. First of all, its default look: it is simple, easy to install and load, and its default skin is black in color. It gives the Blackle look, or I can say it seems to follow the Blackle concept, and I must say it definitely will be very useful, even more so than Blackle. Since Blackle is a site, a search engine, while Epic is a browser, it should be able to save a lot of energy hours. Also, it seems to be very lightweight, even lighter than Google Chrome, which is again very useful.

It has got a toolbar in the left pane, which covers almost all the common shortcuts. It contains icons for all the common shortcuts we use, ranging from Facebook, to Twitter, to Gmail, to travel sites, to videos, to job sites. It has got it all. But one thing that is unique here (apart from the Hindi language typing I have already written about) is that it has a text writer. This is a very effective writer, as one can write content in a normal text format as well as in HTML format. It provides the default tags used in HTML, so one just needs to select them from the drop-down lists and make use of them. There is no problem of typing the tags anymore. Also, these files can be saved in text as well as HTML format, which helps to directly save a file as HTML. First of all, no other browser provides such a good application (at least I have not come across any). Also, one can type content in the writer in as many as 17 languages, including 11 Indian languages, a functionality that no other text writer provides (again, at least I have not come across any). The Epic Writer provides all the features and functionality that a user would require to create a basic HTML file. This writer opens as a new tab in the browser, increasing the functionality of the browser, though most of the options from that left toolbar open as a sidebar, which is again very helpful, as that means one can multitask, for example chatting while also working or doing other things at the same time. So this helps in reducing effort; now one does not need to open two instances of a browser.

Among the other icons, it has got an India icon, which displays current Indian news from various sources. It has got a videos icon, which displays all the bookmarked YouTube videos. Then it has got To Do, My Computer, and Timer icons, which are helpful in managing personal schedules and other data. Then it has got different icons for various networking sites such as Facebook, Twitter, Orkut, Gmail, and Yahoo. It has also got an icon for Google Maps. Then it has icons for job sites, travel sites, and game sites, which display links to various job sites, travel sites, and game sites respectively. Then, as in other browsers, it has got icons for bookmarks, history, downloads, add-ons, and also for Epic applications. And as I wrote earlier, all these features open in a sidebar, thus enabling the user to perform tasks more efficiently. Though multitasking is not really recommended, it will definitely help if a person wants to chat while keeping two other networking sites open at the same time.

Two features I did not mention above were Backup and Collections. Backup is again very useful, as the browser connects directly to the backup software, thus enabling a much more efficient way of handling data. Similarly, the My Computer icon helps connect to the system, thus making the task of uploading or saving files a lot easier and a lot more convenient. The Collections icon again displays data from the system; all these functionalities use the system. Collections also allows easier access to the most frequently used data. In addition to these, it provides an antivirus icon, which again takes data from the system, thereby providing better protection. It also provides direct and easy-to-use scan options. This makes work so easy, as one does not need to open the antivirus software again and again to scan files. These functionalities are simple but special, because not many browsers provide them (again, at least I have not come across any).

These are the many features that Epic provides; apart from this, it is very easy to use. I have been using it for three days now, and I can say it has been fun playing with this browser, especially given the fact that it has almost everything a user needs in a normal surfing session.

This browser can be downloaded from this site. It is very easy to download and install. I do recommend using it, especially to those who write a lot of HTML content, or even to bloggers.

Also, do write to me here if you found this information interesting, and if you found the Epic browser easy and good to use. I have just written about the features that I have used in the last few days. Maybe I am wrong somewhere, or technically incorrect in places, so do write in to me with your suggestions. Also, if you have some additional information about Epic, I will add it here on the blog.

Blackle is a website powered by Google Custom Search, which aims to save energy by displaying a black background and using a grayish-white font color for search results. It is also a search engine and uses the same database as Google, so there is no difference in the results displayed by the two sites. I just came across this site recently, so I thought of it as a new concept from Google. But no, I was completely wrong. It was created by Toby Heap in January 2007 (more than three years back) and is owned by Heap Media, Australia; it is only powered by Google Custom Search. It is available in as many as six languages: English, Portuguese, French, Czech, Italian, and Dutch. As it is not as commonly known as Google, I thought of writing something about it. But that is not really the case; it too is pretty famous, and is ranked in the top 6000 sites by Alexa.

The basic concept behind Blackle is that computer monitors can be made to consume less energy by displaying much darker colors. Blackle is based on a study that tested a variety of CRT and LCD monitors, although there is dispute over whether there really are any energy-saving effects. The concept was first brought to the attention of Heap Media by a blog post, which estimated that Google could save 750 megawatt-hours a year by utilizing it on CRT screens. The homepage of Blackle also provides a count of the number of watt-hours that have been saved through this concept.

During my final year of engineering, I had to make a project, the final-year project. I planned to make it in Java. I was using a database, a text file, an XML file, and an HTML file. While making the project, I faced a lot of problems deciding which parser to use, DOM or SAX. In fact, while submitting my first report to my guide (that is, after planning out my project), she asked me what exactly the difference between the parsers is, and which one would be more useful. I could not actually answer at that time (I had thought of deciding on this part at the end). So I got down to studying the difference between the two, and I found this very good article at that time. It helped me a lot, and in fact I ended up using both parsers in my project, for different reasons. So now I thought that I should put up this article here on my blog. Though it is a very old one, it is still very helpful, as it clearly gives you the difference between the two parsers.

Why they were both built
SAX (Simple API for XML) and DOM (Document Object Model) were both designed to allow programmers to access their information without having to write a parser in their programming language of choice. By keeping the information in XML 1.0 format, and by using either the SAX or DOM APIs, your program is free to use whatever parser it wishes. This can happen because parser writers must implement the SAX and DOM APIs using their favorite programming language. SAX and DOM APIs are both available for multiple languages (Java, C++, Perl, Python, etc.).

So both SAX and DOM were created to serve the same purpose, which is giving you access to the information stored in XML documents using any programming language (and a parser for that language). However, both of them take very different approaches to giving you access to your information.

What is DOM?
DOM gives you access to the information stored in your XML document as a hierarchical object model. DOM creates a tree of nodes (based on the structure and information in your XML document) and you can access your information by interacting with this tree of nodes. The textual information in your XML document gets turned into a bunch of tree nodes.

Regardless of the kind of information in your XML document (whether it is tabular data, or a list of items, or just a document), DOM creates a tree of nodes when you create a Document object given the XML document. Thus DOM forces you to use a tree model (just like a Swing TreeModel) to access the information in your XML document. This works out really well because XML is hierarchical in nature. This is why DOM can put all your information in a tree (even if the information is actually tabular or a simple list).

This picture of the model is overly simplistic, because in DOM, each element node actually contains a list of other nodes as its children. These children nodes might contain text values or they might be other element nodes. At first glance, it might seem unnecessary to access the value of an element node (e.g., in “<name> John </name>”, John is the value) by looking through a list of children nodes inside it. If each element had only one value then this would truly be unnecessary. However, elements may contain text data and other elements; this is why you have to do extra work in DOM just to get the value of an element node. Usually, when pure data is contained in your XML document, it might be appropriate to “lump” all your data into one String and have DOM return that String as the value of a given element node. This does not work so well if the data stored in your XML document is a document (like a Word or Framemaker document). In documents, the sequence of elements is very important. For pure data (like a database table) the sequence of elements does not matter. So DOM preserves the sequence of the elements that it reads from XML documents, because it treats everything as if it were a document. Hence the name DOCUMENT object model.
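To make this concrete, here is a minimal sketch in Java of pulling one element's value out of a DOM tree. The XML snippet, class name, and helper method are my own for illustration; the parsing calls are from the standard javax.xml.parsers and org.w3c.dom APIs:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class DomExample {
    // Parse the XML into a full in-memory tree, then walk it for one value.
    static String extractName(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        // "John" lives in a text node that is a child of the <name>
        // element node; getTextContent() gathers those children for us.
        return doc.getDocumentElement()
                .getElementsByTagName("name").item(0)
                .getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extractName("<person><name>John</name></person>"));
    }
}
```

Note that the whole document is in memory before the first line of your own logic runs, which is exactly the trade-off the article describes.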

If you plan to use DOM as the Java object model for the information stored in your XML document then you really don’t need to worry about SAX. However, if you find that DOM is not a good object model to use for the information stored in your XML document then you might want to take a look at SAX. It is very natural to use SAX in cases where you have to create your own CUSTOM object models.

What is SAX?
SAX chooses to give you access to the information in your XML document not as a tree of nodes, but as a sequence of events! This is very useful, as SAX chooses not to create a default Java object model on top of your XML document (like DOM does). This makes SAX faster, but it also necessitates the following things:

  • creation of your own custom object model
  • creation of a class that listens to SAX events and properly creates your object model.

These steps are not necessary with DOM, because DOM already creates an object model for you (which represents your information as a tree of nodes).

In the case of DOM, the parser does almost everything: it reads the XML document in, creates a Java object model on top of it, and then gives you a reference to this object model (a Document object) so that you can manipulate it. SAX is not called the Simple API for XML for nothing; it is really simple. SAX doesn’t expect the parser to do much. All SAX requires is that the parser read in the XML document and fire a bunch of events depending on what tags it encounters. You are responsible for interpreting these events by writing an XML document handler class, which is responsible for making sense of all the tag events and creating objects in your own object model. So you have to write:

  • your custom object model to “hold” all the information in your XML document, and
  • a document handler that listens to SAX events (which are generated by the SAX parser as it’s reading your XML document) and makes sense of these events to create objects in your custom object model.

SAX can be really fast at runtime if your object model is simple. In this case, it is faster than DOM, because it bypasses the creation of a tree based object model of your information. On the other hand, you do have to write a SAX document handler to interpret all the SAX events (which can be a lot of work).

The events fired by the SAX parser are really very simple. SAX fires an event for every open tag and every close tag. It also fires events for #PCDATA and CDATA sections. Your document handler (which is a listener for these events) has to interpret these events in some meaningful way and create your custom object model based on them; the sequence in which the events are fired is very important. SAX also fires events for processing instructions, DTDs, comments, etc., but the idea is still the same: your handler has to interpret these events (and their sequence) and make sense of them.
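As an illustration (the class name and sample document are hypothetical), a bare-bones handler that just prints the event sequence might look like this:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class EventPrinter extends DefaultHandler {
    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        System.out.println("start: " + qName);   // fired for every open tag
    }
    @Override
    public void endElement(String uri, String local, String qName) {
        System.out.println("end:   " + qName);   // fired for every close tag
    }
    @Override
    public void characters(char[] ch, int start, int length) {
        String text = new String(ch, start, length).trim();
        if (!text.isEmpty()) System.out.println("text:  " + text);  // #PCDATA
    }

    public static void main(String[] args) throws Exception {
        String xml = "<person><name>John</name></person>";
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), new EventPrinter());
    }
}
```

Parsing `<person><name>John</name></person>` fires start: person, start: name, text: John, end: name, end: person, in exactly that order, which is why the sequence of events carries the document structure.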

When to use DOM
If your XML documents contain document data (e.g., Framemaker documents stored in XML format), then DOM is a completely natural fit for your solution. If you are creating some sort of document information management system, then you will probably have to deal with a lot of document data. An example of this is the Datachannel RIO product, which can index and organize information that comes from all kinds of document sources (like Word and Excel files). In this case, DOM is well suited to allow programs access to information stored in these documents.

However, if you are dealing mostly with structured data (the equivalent of serialized Java objects in XML) DOM is not the best choice. That is when SAX might be a better fit.

When to use SAX
If the information stored in your XML documents is machine readable (and generated) data then SAX is the right API for giving your programs access to this information. Machine readable and generated data include things like:

  • Java object properties stored in XML format
  • queries that are formulated using some kind of text based query language (SQL, XQL, OQL)
  • result sets that are generated based on queries (this might include data in relational database tables encoded into XML).

So machine-generated data is information that you normally have to create data structures and classes for in Java. A simple example is an address book that contains information about persons. This address book XML file is not like a word-processor document; rather, it is a document that contains pure data, which has been encoded into text using XML.

When your data is of this kind, you have to create your own data structures and classes (object models) anyway in order to manage, manipulate and persist this data. SAX allows you to quickly create a handler class which can create instances of your object models based on the data stored in your XML documents. An example is a SAX document handler that reads an XML document that contains my address book and creates an AddressBook class that can be used to access this information. The first SAX tutorial shows you how to do this. The address book XML document contains person elements, which contain name and email elements. My AddressBook object model contains the following classes:

  • AddressBook class, which is a container for Person objects
  • Person class, which is a container for name and email String objects.

So my “SAX address book document handler” is responsible for turning person elements into Person objects, and then storing them all in an AddressBook object. This document handler turns the name and email elements into String objects.
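Sticking with the address book example, here is one possible sketch of such a document handler (the Person and AddressBook classes here are simplified stand-ins, not the tutorial's actual code):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Hypothetical custom object model.
class Person {
    String name = "", email = "";
}

class AddressBook {
    List<Person> persons = new ArrayList<>();
}

public class AddressBookHandler extends DefaultHandler {
    AddressBook book = new AddressBook();
    private Person current;          // the Person being built
    private StringBuilder text;      // buffers character events for the current element

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        if (qName.equals("person")) current = new Person();
        text = new StringBuilder();  // start collecting text for this element
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        if (text != null) text.append(ch, start, length);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        if (qName.equals("name"))   current.name  = text.toString().trim();
        if (qName.equals("email"))  current.email = text.toString().trim();
        if (qName.equals("person")) { book.persons.add(current); current = null; }
    }

    public static AddressBook parse(String xml) throws Exception {
        AddressBookHandler h = new AddressBookHandler();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), h);
        return h.book;
    }

    public static void main(String[] args) throws Exception {
        AddressBook ab = parse(
            "<addressbook><person><name>John</name>" +
            "<email>john@example.com</email></person></addressbook>");
        System.out.println(ab.persons.get(0).name); // prints "John"
    }
}
```

Note how the handler keeps a little state (the Person under construction) between events; that bookkeeping is the "extra work" SAX asks of you in exchange for speed.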

The SAX document handler you write does element to object mapping. If your information is structured in a way that makes it easy to create this mapping you should use the SAX API. On the other hand, if your data is much better represented as a tree then you should use DOM.


Posted: May 26, 2010 by Shishir Gupta in Computer Articles, Technologies

I have been using CAPTCHA for some time now. About a week back, when I was trying to implement it, I faced a few problems as I could not display the image. After a lot of troubleshooting I got it working. Then one of my friends happened to run into the same problem, so I searched over the net and realised that this is a very common issue that many developers face. So I thought of putting up the procedure here on my blog. But before I do that, I thought of telling everyone about CAPTCHA. I found this very informative article on Wikipedia and I am putting it up here.


A CAPTCHA or Captcha is a type of challenge-response test used in computing to ensure that the response is not generated by a computer. The process usually involves one computer (a server) asking a user to complete a simple test which the computer is able to generate and grade. Because other computers are unable to solve the CAPTCHA, any user entering a correct solution is presumed to be human. Thus, it is sometimes described as a reverse Turing test, because it is administered by a machine and targeted to a human, in contrast to the standard Turing test that is typically administered by a human and targeted to a machine. A common type of CAPTCHA requires that the user type letters or digits from a distorted image that appears on the screen.

The term “CAPTCHA” (based upon the word capture) was coined in 2000 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford (all of Carnegie Mellon University). It is a contrived acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart.” Carnegie Mellon University attempted to trademark the term, but the trademark application was abandoned on 21 April 2008.

A CAPTCHA is a means of automatically generating new challenges which:

  • current software is unable to solve accurately,
  • most humans can solve, and
  • that do not rely on the type of CAPTCHA being new to the attacker.

Although a checkbox “check here if you are not a bot” might serve to distinguish between humans and computers, it is not a CAPTCHA because it relies on the fact that an attacker has not spent effort to break that specific form. (Such ‘check here’ methods are very easy to defeat.) Instead, CAPTCHAs rely on difficult problems in artificial intelligence. In the short term, this has the benefit of distinguishing humans from computers. In the long term, it creates an incentive to advance the state of Artificial Intelligence, which the originators of the term view as a benefit in its own right.

CAPTCHAs are used to prevent automated software from performing actions which degrade the quality of service of a given system, whether due to abuse or resource expenditure. CAPTCHAs can be deployed to protect systems vulnerable to e-mail spam, such as the webmail services of Gmail, Hotmail, and Yahoo! Mail.

CAPTCHAs have found active use in stopping automated posting to blogs, forums and wikis, whether as a result of commercial promotion, or harassment and vandalism. CAPTCHAs also serve an important function in rate limiting: automated usage of a service might be acceptable until it is done in excess, to the detriment of human users. In such a case, a CAPTCHA can enforce automated-usage policies set by the administrator once certain usage metrics exceed a given threshold. The article-rating systems used by many news websites are another example of an online facility vulnerable to manipulation by automated software.

Because CAPTCHAs rely on visual perception, users unable to view a CAPTCHA (for example, due to a disability or because it is difficult to read) will be unable to perform the task protected by a CAPTCHA. Therefore, sites implementing CAPTCHAs may provide an audio version of the CAPTCHA in addition to the visual method. The official CAPTCHA site recommends providing an audio CAPTCHA for accessibility reasons. This combination represents the most accessible CAPTCHA currently known to exist, but it is far from universally adopted, with most websites offering only the visual CAPTCHA, with or without providing the option of generating a new image if one is too difficult to read.

Attempts at more accessible CAPTCHAs
Even an audio and visual CAPTCHA will require manual intervention for some users, such as those who are both visually impaired and deaf. There have been various attempts at creating CAPTCHAs that are more accessible, including the use of JavaScript, mathematical questions (“what is 1+1?”), or “common sense” questions (“what color is the sky on a clear day?”). However, these do not meet both criteria: being automatically generatable and not relying on the type of CAPTCHA being new to the attacker.

There are a few approaches to defeating CAPTCHAs:

  • exploiting bugs in the implementation that allow the attacker to completely bypass the CAPTCHA,
  • improving character recognition software, or
  • using cheap human labor to process the tests.

Insecure implementation
As with any security system, design flaws in an implementation can prevent the theoretical security from being realized. Many CAPTCHA implementations, especially those which have not been designed and reviewed by experts in the field of security, are prone to common attacks.

Some CAPTCHA protection systems can be bypassed without using Optical Character Recognition (OCR) simply by re-using the session ID of a known CAPTCHA image. A correctly designed CAPTCHA does not allow multiple solution attempts at one CAPTCHA. This prevents the reuse of a correct CAPTCHA solution or making a second guess after an incorrect OCR attempt. Other CAPTCHA implementations use a hash (such as an MD5 hash) of the solution as a key passed to the client to validate the CAPTCHA. Often the CAPTCHA is of small enough size that this hash could be cracked. Further, the hash could assist an OCR based attempt. A more secure scheme would use an HMAC. Finally, some implementations use only a small fixed pool of CAPTCHA images. Eventually, when enough CAPTCHA image solutions have been collected by an attacker over a period of time, the CAPTCHA can be broken by simply looking up solutions in a table, based on a hash of the challenge image.
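To illustrate the HMAC suggestion (the key, challenge id and solution below are made-up values, and a real system would also expire each challenge after one attempt), a server-side validator might tag each challenge like this:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class CaptchaToken {
    // Server-side secret; never sent to the client. Hypothetical value.
    private static final byte[] KEY = "change-this-secret-key".getBytes();

    // Tag the challenge id + solution with an HMAC instead of shipping a
    // bare MD5(solution) to the client, which could be brute-forced offline.
    static String tag(String challengeId, String solution) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        byte[] out = mac.doFinal((challengeId + ":" + solution).getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : out) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // On submit, recompute the tag from the user's answer and compare.
    // A production system would use a constant-time comparison here.
    static boolean verify(String challengeId, String answer, String expectedTag)
            throws Exception {
        return tag(challengeId, answer).equals(expectedTag);
    }

    public static void main(String[] args) throws Exception {
        String t = tag("c42", "XK7P2");
        System.out.println(verify("c42", "XK7P2", t)); // true
        System.out.println(verify("c42", "wrong", t)); // false
    }
}
```

Without the server-side key, an attacker who intercepts the tag cannot feasibly recover the solution from it, unlike a plain hash of a short CAPTCHA answer.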

Computer character recognition
A number of research projects have attempted (often with success) to beat visual CAPTCHAs by creating programs that contain the following functionality:

  1. Pre-processing: Removal of background clutter and noise.
  2. Segmentation: Splitting the image into regions which each contain a single character.
  3. Classification: Identifying the character in each region.

These steps are easy tasks for computers. The only step where humans still outperform computers is segmentation. If the background clutter consists of shapes similar to letter shapes, and the letters are connected by this clutter, the segmentation becomes nearly impossible with current software. Hence, an effective CAPTCHA should focus on making segmentation difficult.

Several research projects have broken real world CAPTCHAs, including one of Yahoo’s early CAPTCHAs called “EZ-Gimpy” and the CAPTCHA used by popular sites such as PayPal, LiveJournal, phpBB, and other open source solutions. In January 2008 Network Security Research released their program for automated Yahoo! CAPTCHA recognition. Windows Live Hotmail and Gmail, the other two major free email providers, were cracked shortly after.

In February 2008 it was reported that spammers had achieved a success rate of 30% to 35%, using a bot, in responding to CAPTCHAs for Microsoft’s Live Mail service and a success rate of 20% against Google’s Gmail CAPTCHA. A Newcastle University research team has defeated the segmentation part of Microsoft’s CAPTCHA with a 90% success rate, and claim that this could lead to a complete crack with a greater than 60% rate.

Human solvers
CAPTCHA is vulnerable to a relay attack that uses humans to solve the puzzles. One approach involves relaying the puzzles to a group of human operators who can solve CAPTCHAs. In this scheme, a computer fills out a form and when it reaches a CAPTCHA, it gives the CAPTCHA to the human operator to solve.

Spammers pay about $0.80 to $1.20 for each 1,000 solved captchas to companies employing human solvers in India, Bangladesh, and China.

Another approach involves copying the CAPTCHA images and using them as CAPTCHAs for a high-traffic site owned by the attacker. With enough traffic, the attacker can get a solution to the CAPTCHA puzzle in time to relay it back to the target site. In October 2007, a piece of malware appeared in the wild which enticed users to solve CAPTCHAs in order to see progressively further into a series of striptease images. A more recent view is that this is unlikely to work due to unavailability of high-traffic sites and competition by similar sites.

These methods have been used by spammers to set up thousands of accounts on free email services such as Gmail and Yahoo!. Since Gmail and Yahoo! are unlikely to be blacklisted by anti-spam systems, spam sent through these compromised accounts is less likely to be blocked.

The circumvention of CAPTCHAs may violate the anti-circumvention clause of the Digital Millennium Copyright Act (DMCA) in the United States. In 2007, Ticketmaster sued software maker RMG Technologies for its product which circumvented the ticket seller’s CAPTCHAs on the basis that it violated the anti-circumvention clause of the DMCA. In October 2007, an injunction was issued stating that Ticketmaster would likely succeed in making its case. In June 2008, Ticketmaster filed for Default Judgment against RMG. The Court granted Ticketmaster the Default and entered an $18.2M judgment in favor of Ticketmaster.

Some researchers promote image-recognition CAPTCHAs as a possible alternative to text-based CAPTCHAs. To date, only RapidShare, Linux Mint and Ubuntu have made use of an image-based CAPTCHA. Many amateur users of the phpBB forum software (which has suffered greatly from spam) have implemented an open-source image-recognition CAPTCHA system in the form of an addon called KittenAuth. In its default form, it presents a question requiring the user to select a stated type of animal from an array of thumbnail images of assorted animals. The images (and the challenge questions) can be customized, for example to present questions and images which would be easily answered by the forum’s target userbase. Furthermore, for a time, RapidShare free users had to get past a CAPTCHA where they had to enter only the letters attached to a cat, while others were attached to dogs. This was later removed because (legitimate) users had trouble entering the correct letters.

Image-recognition CAPTCHAs face many potential problems which have not been fully studied. It is difficult for a small site to acquire a large dictionary of images that an attacker does not have access to, and without a means of automatically acquiring new labelled images, an image-based challenge does not meet the definition of a CAPTCHA. KittenAuth, by default, had only 42 images in its database. Microsoft’s “Asirra,” which it is providing as a free web service, attempts to address this by means of a Microsoft Research partnership that has provided it with more than three million images of cats and dogs, classified by people at thousands of US animal shelters. Researchers claim to have written a program that can break the Microsoft Asirra CAPTCHA.

Human solvers are a potential weakness for strategies such as Asirra. If the database of cat and dog photos can be downloaded, then paying workers $0.01 to classify each photo as either a dog or a cat means that almost the entire database of photos can be deciphered for $30,000. Photos that are subsequently added to the Asirra database are then a relatively small data set that can be classified as they first appear. Causing minor changes to images each time they appear will not prevent a computer from recognizing a repeated image as there are robust image comparator functions (e.g., image hashes, color histograms) that are insensitive to many simple image distortions. Warping an image sufficiently to fool a computer will likely also be troublesome to a human.

Researchers at Google used image orientation and collaborative filtering as a CAPTCHA. Generally speaking, people know what “up” is, but computers have a difficult time determining it for a broad range of images. Images were pre-screened to select those whose orientation is hard to detect automatically (e.g., no skies, no faces, no text). Images were also collaboratively filtered by showing a “candidate” image along with known-good images for the person to rotate. If there was a large variance in answers for the candidate image, it was deemed too hard for people as well and discarded. Currently, CAPTCHA creators recommend the use of reCAPTCHA as the official implementation. In September 2009, Google acquired reCAPTCHA to aid its book-digitization efforts.


I just hope this information will be very useful for everyone who uses CAPTCHA. I will also be writing about implementing CAPTCHA very shortly.


Google comes out with another new gadget, this time a TV, called Google TV. What I would just say is: watch out for Google TV. It is said to be a combination of TV, Internet and Search – endless possibilities. Here is the article on its launch from the official blog. I also really liked the heading of the article: “TV meets web. Web meets TV.”

Announcing Google TV: TV meets web. Web meets TV.
If there’s one entertainment device that people know and love, it’s the television. In fact, 4 billion people across the world watch TV and the average American spends five hours per day in front of one*. Recently, however, an increasing amount of our entertainment experience is coming from our phones and computers. One reason is that these devices have something that the TV lacks: the web. With the web, finding and accessing interesting content is fast and often as easy as a search. But the web still lacks many of the great features and the high-quality viewing experience that the TV offers.

So that got us thinking…what if we helped people experience the best of TV and the best of the web in one seamless experience? Imagine turning on the TV and getting all the channels and shows you normally watch and all of the websites you browse all day — including your favorite video, music and photo sites. We’re excited to announce that we’ve done just that.
Google TV is a new experience for television that combines the TV that you already know with the freedom and power of the Internet. With Google Chrome built in, you can access all of your favorite websites and easily move between television and the web. This opens up your TV from a few hundred channels to millions of channels of entertainment across TV and the web. Your television is also no longer confined to showing just video. With the entire Internet in your living room, your TV becomes more than a TV — it can be a photo slideshow viewer, a gaming console, a music player and much more.

Google TV uses search to give you an easy and fast way to navigate to television channels, websites, apps, shows and movies. For example, already know the channel or program you want to watch? Just type in the name and you’re there. Want to check out that funny YouTube video on your 48” flat screen? It’s just a quick search away. If you know what you want to watch, but you’re not sure where to find it, just type in what you’re looking for and Google TV will help you find it on the web or on one of your many TV channels. If you’d rather browse than search, you can use your standard program guide, your DVR or the Google TV home screen, which provides quick access to all of your favorite entertainment so you’re always within reach of the content you love most.

Because Google TV is built on open platforms like Android and Google Chrome, these features are just a fraction of what Google TV can do. In our announcement today at Google I/O, we challenged web developers to start coming up with the next great web and Android apps designed specifically for the TV experience. Developers can start optimizing their websites for Google TV today. Soon after launch, we’ll release the Google TV SDK and web APIs for TV so that developers can build even richer applications and distribute them through Android Market. We’ve already started building strategic alliances with a number of companies — like and Rovi — at the leading edge of innovation in TV technology. is a next-generation TV application working to provide semantic search, personalized recommendation and social features for Google TV across all sources of premium content available to the user. Rovi is one of the world’s leading guide applications. We’re looking forward to seeing all of the ways developers will use this new platform.

We’re working together with Sony and Logitech to put Google TV inside of televisions, Blu-ray players and companion boxes. These devices will go on sale this fall, and will be available at Best Buy stores nationwide. You can sign up here to get updates on Google TV availability.

This is an incredibly exciting time – for TV watchers, for developers and for the entire TV ecosystem. By giving people the power to experience what they love on TV and on the web on a single screen, Google TV turns the living room into a new platform for innovation. We’re excited about what’s coming. We hope you are too.”

Well, reading this, I feel that this is not going to be anything close to our regular TV.

After three years, Apple is back to doing what it is best known for: launching a new product. The computing giant launched the iPad, a sleek tablet that aims to revolutionize the publishing business the same way the Apple iPod transformed the music industry and the iPhone transformed the telecom industry. But there are many misses in it, alongside all the great features it has got. Critics are lamenting the absence of some features which they feel are basic for any product of the iPad’s category. So what is missing from Apple’s big launch? Read on to find out. I found this list of 9 misses at Indiatimes Infotech, and I am listing them out here.

Lack of camera
One big miss in the Apple iPad is a camera. The lack of a camera is disappointing, especially since the device is said to sit somewhere between a smartphone and a laptop, and in today’s smartphones the presence of a camera is almost a given.
The lack of a camera also means there is no option to video chat or even do video-conferencing.

No SD card support
With the iPad coming in 16, 32 and 64 GB versions, storage seems big. But extra storage is always welcome, and with no SD card support that option is not available.

No Adobe Flash support
Another big miss is the absence of Flash support. With a screen as big as 9.7 inches, Flash support is surely going to be missed, as surfing the Web without Flash is not the same. The “big, empty video boxes in the middle of a page” that appear are sure to disappoint. People won’t be able to access Miniclip, play FarmVille, or watch ESPN or Hulu.
In the iPhone 3GS too, when users browse Web pages with Adobe Flash, it displays empty spaces with missing icons.

Lack of multitasking with applications
Multitasking seems to be becoming a computing norm, with even mobile OSes offering it. However, there is no multitasking in the iPad’s OS. This means, for example, that users can’t listen to FM radio while they surf the Web, switch back and forth between Facebook and Twitter, or write an email while talking on a VOIP call.

Widescreen/Aspect Ratio
Analysts feel that a 4:3 aspect ratio may be just perfect for using iPhone apps in full-screen mode. However, a similar aspect ratio may not be as good for media, especially since the digital world is rapidly moving to widescreen formats.

Digital ink and paper
Though touted as a ‘Kindle killer’ by many, the Apple iPad is not exactly that, for it lacks the advantages of digital ink and paper, which are considered integral parts of an e-reader.

No HDMI output
There is no facility for HDMI output in the iPad. This means users will not be able to view HD videos on a large TV screen even if they have downloaded them from iTunes.

Need of Adapters
The design may seem seamless. But you may find it too seamless when you figure out the number of adapters you would require with it. The Apple iPad reportedly has no USB ports, so whatever you want to use with it (a camera, a printer, or even a USB drive), you would need an adapter.

Can’t ‘download’ Apps
If you are among those who love trying new applications, the iPad is surely not for you: it allows users to download applications only from Apple’s App Store.

Whether to use a file-system or a database to store your application’s data has been a contentious issue for a long time now. Pune-based startup Druvaa has weighed in on this issue on its blog. Their post is republished here.

It’s interesting to see how databases have come a long way and have clearly overshadowed file-systems for storing structured or unstructured information.

Technically, both of them support the basic features necessary for data access. For example, both of them –

  • manage data to ensure its integrity and quality
  • allow shared access by a community of users
  • use a well-defined schema for data access
  • support a query language.

But file-systems seriously lack some of the critical features necessary for managing data. Let’s take a look at some of these features.

Transaction support
Atomic transactions guarantee the complete failure or success of an operation. This is especially needed when there is concurrent access to the same data-set. This is one of the basic features provided by all databases.

But most file-systems don’t have this feature. Only the lesser-known file-systems – Transactional NTFS (TxF), Sun ZFS and Veritas VxFS – support it. Most of the popular opensource file-systems (including ext3, xfs, reiserfs) are not even POSIX compliant.

Fast Indexing
Databases allow indexing based on any attribute or data-property (i.e., SQL columns). This enables fast retrieval of data based on the indexed attribute. This functionality is not offered by most file-systems, i.e., you can’t quickly access “all files created after 2 PM today”.

Desktop search tools like Google Desktop or Mac Spotlight offer this functionality, but to do so they have to scan and index the complete file-system and store the information in an internal relational database.

Snapshots
A snapshot is a point-in-time copy/view of the data. Snapshots are needed for backup applications, which need consistent point-in-time copies of data.

The transactional and journaling capabilities enable most databases to offer snapshots without stopping access to the data. Most file-systems, however, don’t provide this feature (ZFS and VxFS being the only exceptions). Backup software has to depend on either the running application or the underlying storage for snapshots.

Clustering
Advanced databases like Oracle (and now MySQL) also offer clustering capabilities. The “g” in “Oracle 11g” actually stands for “grid”, or clustering, capability. MySQL offers shared-nothing clusters using synchronous replication. This helps databases scale up and support larger and more fault-tolerant production environments.

File-systems still don’t support this option. The only exceptions are Veritas CFS and GFS (open source).

Replication
Replication is a commodity feature with databases and forms the basis of disaster-recovery plans. File-systems still have to evolve to handle it.

Relational View of Data
File systems store files and other objects only as a stream of bytes, and have little or no information about the data stored in the files. Such file systems also provide only a single way of organizing the files, namely via directories and file names. The associated attributes are also limited in number e.g. – type, size, author, creation time etc. This does not help in managing related data, as disparate items do not have any relationships defined.

Databases, on the other hand, offer easy means to relate stored data. They also offer a flexible query language (SQL) to retrieve it. For example, it is possible to query a database for “contacts of all persons who live in Acapulco and sent emails yesterday”, but impossible in the case of a file system.

File-systems need to evolve and provide capabilities to relate different data-sets. This will help the application writers to make use of native file-system capabilities to relate data. A good effort in this direction was Microsoft WinFS.

The only disadvantage of using a database as the primary storage option seems to be the additional cost associated with it. But I see no reason why file-systems in the future won’t borrow these features from databases.

Druvaa inSync uses a proprietary file-system to store and index the backed-up data. The meta-data for the file-system is stored in an embedded PostgreSQL database. The database-driven model was chosen to store additional identifiers with each block – size, hash and time. This helps the file-system to –

  • Divide files into variable sized blocks.
  • Data deduplication – Store single copy of duplicate blocks.
  • Temporal File-system – Store time information with each block. This enables faster time-based restores.


Druvaa is a Pune-based startup that sells fast, efficient, and cheap backup software for enterprises and SMEs. It makes heavy use of data de-duplication technology to deliver on the promise of speed and low bandwidth consumption. Let’s see what exactly data de-duplication is and how it works.

Definition of Data De-duplication

Data deduplication, or single instancing, essentially refers to the elimination of redundant data. In the deduplication process, duplicate data is deleted, leaving only one copy (a single instance) of the data to be stored. However, an index of all the data is still retained should it ever be required.


A typical email system might contain 100 instances of the same 1 MB file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy, reducing the storage and bandwidth demand to only 1 MB.

Technological Classification

The practical benefits of this technology depend upon various factors like –

  • Point of Application – Source Vs Target
  • Time of Application – Inline vs Post-Process
  • Granularity – File vs Sub-File level
  • Algorithm – Fixed size blocks Vs Variable length data segments

A simple relation between these factors can be explained using the diagram below –


Deduplication Technological Classification


Target Vs Source based Deduplication

Target-based deduplication acts on the target data-storage media. In this case the client is unmodified and not aware of any deduplication. The deduplication engine can be embedded in the hardware array, which can be used as a NAS/SAN device with deduplication capabilities. Alternatively, it can be offered as an independent software or hardware appliance which acts as an intermediary between the backup server and the storage arrays. In both cases it improves only storage utilization.


Target vs Source Deduplication

Source-based deduplication, on the contrary, acts on the data at the source before it’s moved. A deduplication-aware backup agent is installed on the client, which backs up only the unique data. The result is improved bandwidth and storage utilization, but this imposes additional computational load on the backup client.

Inline vs Post-Process Deduplication

In target-based deduplication, the deduplication engine can either process data for duplicates in real time (i.e., as it is sent to the target) or after it has been stored in the target storage.

The former is called inline deduplication. The advantages are –

  • Increase in overall efficiency as data is only passed and processed once.
  • The processed data is immediately available for post-storage processes like recovery and replication, reducing the RPO and RTO windows.

The disadvantages are –

  • Decreased write throughput.
  • Lower extent of deduplication – only the fixed-length block approach can be used.

Inline deduplication processes only incoming raw blocks and has no knowledge of the files or file structure, which forces it to use the fixed-length block approach.


Inline vs Post Process Deduplication


Post-process deduplication acts asynchronously on data after it has been stored. Its advantages and disadvantages are the exact opposite of those listed above for inline deduplication.

File vs Sub-file Level Deduplication

The duplicate removal algorithm can be applied at the full-file or sub-file level. Full-file duplicates can easily be eliminated by calculating a single checksum of the complete file data and comparing it against the existing checksums of already backed-up files. This is simple and fast, but the extent of deduplication is limited, because it does not address duplicate content found inside different files or data-sets (e.g. emails).
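A minimal sketch of whole-file deduplication, assuming nothing more than a list of file paths (the helper names here are made up for illustration):

```python
import hashlib
import os
import tempfile

def file_digest(path, chunk_size=1 << 20):
    """Checksum of the complete file contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

def find_duplicates(paths):
    """Group files by digest; return only the groups with duplicates."""
    by_digest = {}
    for p in paths:
        by_digest.setdefault(file_digest(p), []).append(p)
    return {d: ps for d, ps in by_digest.items() if len(ps) > 1}

# Demo: two identical files and one different file
tmp = tempfile.mkdtemp()
for name, content in [("a.bin", b"same"), ("b.bin", b"same"), ("c.bin", b"other")]:
    with open(os.path.join(tmp, name), "wb") as f:
        f.write(content)
paths = [os.path.join(tmp, n) for n in ("a.bin", "b.bin", "c.bin")]
print(find_duplicates(paths))   # one group: a.bin and b.bin
```

One digest per file is all the comparison needs, which is why this approach is fast but blind to duplication inside differing files.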

Sub-file level deduplication breaks the file into smaller fixed-size or variable-size blocks, and then uses a standard hash-based algorithm to find identical blocks.

Fixed-Length Blocks vs Variable-Length Data Segments

The fixed-length block approach, as the name suggests, divides files into blocks of a fixed size and uses a simple checksum (MD5, SHA, etc.) to find duplicates. Although it is possible to look for repeated blocks, the approach provides very limited effectiveness, because the primary opportunity for data reduction lies in finding duplicate blocks in two transmitted datasets that are made up mostly – but not completely – of the same data segments.


For example, similar data blocks may be present at different offsets in two different datasets; in other words, the block boundaries of similar data may differ. This commonly happens when some bytes are inserted into a file: when the changed file is processed again and divided into fixed-length blocks, every block after the insertion point appears to have changed. Two datasets with only a small amount of difference are therefore likely to have very few identical fixed-length blocks.
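The boundary-shift problem is easy to demonstrate; the block size and data below are arbitrary:

```python
import hashlib
import random

def fixed_block_digests(data: bytes, block_size: int = 4096):
    """Digest of every fixed-size block of the dataset."""
    return {hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)}

rng = random.Random(0)
original = rng.randbytes(256 * 1024)   # 256 KB of data (64 blocks)
modified = b"!" + original             # identical data, one byte inserted

shared = fixed_block_digests(original) & fixed_block_digests(modified)
print(len(shared))   # every block boundary shifted, so no block matches
```

One inserted byte shifts every subsequent block boundary, so even though the two datasets are almost entirely identical, fixed-length blocking finds nothing to deduplicate.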

Variable-length data segment technology divides the data stream into variable-length segments using a method that can find the same block boundaries in different locations and contexts. This allows the boundaries to “float” within the data stream, so that changes in one part of the dataset have little or no impact on the boundaries in other parts of the dataset.
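A minimal content-defined chunker makes the “floating boundary” idea concrete. The naive rolling sum below stands in for the Rabin-style fingerprints real products use; the window, mask and minimum chunk size are arbitrary choices:

```python
import hashlib
import random

def chunk_digests(data: bytes, window=48, mask=(1 << 11) - 1, min_size=256):
    """Cut a chunk whenever the rolling sum of the last `window` bytes
    hits a magic value, so boundaries depend on content, not position."""
    digests, start, rolling = set(), 0, 0
    for i in range(len(data)):
        rolling += data[i]
        if i >= window:
            rolling -= data[i - window]
        if i + 1 - start >= min_size and (rolling & mask) == mask:
            digests.add(hashlib.sha256(data[start:i + 1]).hexdigest())
            start = i + 1
    if start < len(data):
        digests.add(hashlib.sha256(data[start:]).hexdigest())
    return digests

rng = random.Random(0)
original = rng.randbytes(256 * 1024)
modified = original[:5000] + b"!" + original[5000:]   # byte inserted mid-stream

a, b = chunk_digests(original), chunk_digests(modified)
print(len(a & b), "of", len(a), "chunks still shared")
```

Only the chunk containing the insertion (and at most a neighbour or two) changes; the downstream boundaries resynchronize because they are derived from the data itself rather than from byte offsets.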

ROI Benefits

Every organization generates data at its own rate. The extent of savings depends upon – but is not directly proportional to – the number of applications or end users generating data. Overall, the deduplication savings depend upon the following parameters –

  1. No. of applications or end users generating data
  2. Total data
  3. Daily change in data
  4. Type of data (emails/ documents/ media etc.)
  5. Backup policy (weekly full with daily incrementals, or daily full)
  6. Retention period (90 days, 1 year etc.)
  7. Deduplication technology in place

The actual benefits of deduplication are realized once the same dataset is processed multiple times over a span of time, as with weekly/daily backups. This is especially true for variable-length data segment technology, which is much better at dealing with arbitrary byte insertions.


While some vendors claim bandwidth/storage saving ratios as high as 1:300, Druvaa customer statistics show results between 1:4 and 1:50 for source-based deduplication.



The original post –

1. Verify that you can add the Web Part properly to a Web Part zone.
Adding a Web Part to a Web Part zone is the most common user task. Therefore, it is essential that the Web Part works correctly to create a good user experience.

To Test

  • Create a new Web Part Page.
  • Click Modify Shared Page, click Add Web Parts and then click Import.
  • Import the .dwp file for your Web Part.
  • Add the Web Part to a Web Part zone.

2. Verify that static Web Parts render appropriately and do not cause the Web Part Page to fail.
Web Parts that are placed outside of a Web Part zone, or static Web Parts, are contained in .aspx pages, but not in the Web Part zone. Because the static Web Part is a Web form control, ASP.NET renders the Web Part. You cannot save changes in either shared or personal view.

To Test

  • Open FrontPage.
  • Create a new blank page.
  • In Design view, on the Data menu, click Insert Web Part.
  • From the Web Part Gallery that appears in the task pane, drag a Web Part onto the page.
  • Save the page as an .aspx page.
  • View the page in the browser.
    Note : Make sure that your Web Part is within the <form runat="server"> tags.
  • Verify that the part renders correctly (for example, you should not be able to save changes in a static Web Part).

3. Verify that the Web Part works correctly regardless of where the Web Part Page is located.
You can add Web Parts to Web Part Pages that are contained in a document library as well as Web Part Pages that are contained in the top-level Web site. They should work correctly in either location.

To Test

  • Create a Web Part Page in a document library.
  • Browse to the portal site.
  • On the Create menu, click Web Part Page.
  • In the New Web Part Page creation form, Save Location lists the document libraries in which the Web Part can be saved. Select a document library, and then click Create.
  • Import your Web Part from the gallery.
  • Create a Web Part Page in the top-level Web site.
  • Open a SharePoint site in FrontPage.
  • On the File menu, click New.
  • In the New Page section of the task pane, click More page templates, and on the Web Part Pages tab, select a template.
  • Click on a zone to bring up the gallery (or on the Data menu, click Insert Web Part), and then import your Web Part into a zone.
  • Save the Web Part Page in the top-level Web site, for example, at the same location where the default.aspx is located.

4.  Verify that property attributes are correctly defined.
You can specify Web Part properties in two ways: as an XML BLOB contained within the Web Part, or as an attribute within the Web Part.
Because of how the Web Part infrastructure handles property values, I recommend that you define properties as simple types rather than as complex types so that they work properly if specified as attributes of the Web Part.

To Test

  • Create a static Web Part in FrontPage and, in Code view, try setting every property the Web Part has as an attribute.
  • Browse to the page and see if the page fails or if the property was ignored.

5. Verify that Web Part changes made in personal view are not reflected in shared view.
Changes made in shared view are seen by all users. Changes made in personal view should only be seen by the person that made them.

To Test

  • Add the Web Part in shared view.
  • Edit the properties in shared view.
  • Change to personal view.
  • Edit the property in personal view.
  • Change back to shared view, and then make sure the Web Part does not use any of the values changed while in personal view.

6. Verify that every public property can handle bad input.
As for any ASP.NET control or application, you should validate all user input before performing operations with it. This validation can help to protect against not only accidental misuse, but also deliberate attacks such as SQL injection, cross-site scripting, buffer overflow, and so on.

To Test

  • Verify that the Web Part can detect invalid input for properties and that it informs the end user that bad data was entered.
  • Verify that the property is not used outside of its intended purpose. For example, if a Web Part is intended to allow users to link URLs, limit the protocol usage to HTTP instead of allowing any protocol to be saved (for example, javascript://).
  • Verify that the Web Part HTML encodes the property value when rendering user input to the client.
  • Check all the ways in which property values can be changed. For example, the following :
  • Modifying the .dwp file in a text editor.
  • Modifying properties in the tool pane.
  • Modifying properties in Code view in FrontPage.
  • Using the Web Part Page Services Component (WPSC), which is a client-side object model that provides a way to set properties and persist them from the client browser.
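Web Parts themselves are .NET code, but the HTML-encoding check in the list above is language-neutral; here is a Python sketch of the principle, with a hypothetical user-supplied property value:

```python
from html import escape

# Untrusted value a user typed into a Web Part property
value = '<script>alert("xss")</script>'

# Encode it before rendering it into the page so the markup is
# displayed as text instead of being executed by the browser
print(escape(value, quote=True))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The same rule applies to every rendering path: encode at output time, after validation, so stored values stay intact while the page stays safe.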

7. Verify that the Web Part handles all of its exceptions.
A Web Part should handle all exceptions rather than risk the possibility of causing the Web Part Page to stop responding.

To Test

  • Enter error and boundary cases for Web Part properties to verify that the Web Part never breaks the page by not catching one of its own exceptions.

8. Verify that the Web Part renders correctly in Microsoft Office FrontPage.
If your organization is using FrontPage to customize SharePoint sites, verify that the Web Part renders properly within FrontPage. To accomplish this, the Web Part developer must implement the IDesignTimeHtmlProvider interface.

To Test

  • Open up a Web Part Page that contains the Web Part in Design view in FrontPage. Verify that the Web Part renders correctly and you do not see the message “There is no preview available for this part.”

9. Verify that Web Part properties displayed in the tool pane are user-friendly.
Because the tool pane is where users modify Web Part properties, it is important that users can work with Web Part properties easily in it.

To Test

  • Add the Web Part to a Web Part Page. Click Modify Shared Page, click Modify Shared Web Parts, and then select your Web Part. The tool pane should appear and display the Web Part properties.
  • Verify that the Friendly Name is easy to understand, for example, a property named MyText should be My Text (notice the space between the two words).
  • Make sure the Description (the ToolTip that appears) helps the user understand how and why to set the property.
  • Verify that the Category name makes sense. (Miscellaneous is used when no category is specified for the property, but it is not especially helpful to the user.)
  • Verify that the order of the properties makes sense.
  • If appropriate, check that these properties are localized using the following method : After installing the SharePoint language packs, create a new subsite and select a different language. Add the Web Part to a Web Part Page in the new subsite and verify that Friendly Name, Description, and Category are localized in the tool pane.

10. Verify that the Web Part appears appropriately in the search results.
Because Web Part galleries can contain numerous custom Web Parts, the search capability helps users quickly find the Web Parts they want.
The Web Part infrastructure uses the Title and Description properties of the Web Part to build the result set, so comprehensive information in these fields results in easily searchable Web Parts.

To Test

  • Add the Web Part to the Site Web Part Gallery by navigating to Site Settings, clicking Go to Site Administration, clicking Manage Web Part Gallery, and then clicking New Web Part.
  • Choose the Web Part you’re testing, and then click Populate Gallery.
  • Browse to a Web Part Page, and then click Modify Shared Page, click Add Web Parts, and then click Search. Enter the appropriate search text and click Go. The Web Part should appear as one of the top choices.

11. Verify that you can import and export the Web Part properly.
By default, whenever you export a Web Part, each Web Part property is included in the .dwp file. However, because properties can contain sensitive information, for example, a date of birth, you can identify a property as controlled, allowing you or the user to exclude the value if the Web Part is exported. Only properties exported while the user is in personal view can be controlled; in shared view all property values are exported.

To Test

  • Add a Web Part from the gallery into a Web Part Page and set the Web Part’s properties.
  • On the Web Part chrome drop-down menu, click Export to export the Web Part.
  • Save the .dwp generated onto your local computer, and re-import the Web Part by clicking Modify Shared Page, clicking Add Web Parts, and then clicking Import.
  • Browse to the .dwp file, click Upload, and then click Import.
  • Make sure that the properties that were exported are correctly imported.
  • Verify that any property that would not make sense to export (for example, a Social Security number) has the ExportControlledProperties attribute set. (The Allow Export Sensitive Properties check box in the tool pane should be cleared.)

12. Verify that the Web Part previews properly.
It is important to create previews for Web Parts so that administrators are able to review the parts included in the Web Part gallery.

To Test

  • Go to Site Settings, click Go to Site Administration, click Manage Web Part Gallery and then click the Web Part. The preview should render.
  • Verify that there are no script errors.
  • Verify that the preview appears correctly.

13. Verify that the Web Part can access its resources in different setup configurations.
Web Part resources cannot be part of the DLL because they must be URL-accessible. Examples of these resources are images, .js files, or .aspx pages.

To Test

  • Note : Because a Web Part assembly can be installed in either the bin directory (<%SystemDrive%>\Inetpub\wwwroot\bin), or the global assembly cache, you must go through each of these test steps with the Web Part installed in the bin and again with the Web Part installed in the global assembly cache.
  • Add the Web Part to your page in all the following scenarios and make sure that it can correctly access its resources, for example the following :
  • Add the Web Part to the top-level Web site.
  • Add the Web Part to a subsite with unique permissions, in which the user only has rights in the subsite.
  • Add the Web Part to a Web Part Page inside a folder in a document library.
  • Add a Web Part to a site with Self-Service Site Creation enabled on the virtual server.
  • Add a Web Part to a site with Host Header mode enabled.
  • Add the Web part to a site where the top-level Web site is not a SharePoint site, for example, http://servername/customURL.
  • Add the Web Part to Web Part Pages that are in different subsite languages.

14. Verify that Web Part properties are not dependent on each other.
Because you cannot guarantee the order that properties are set in the tool pane, you should avoid writing Web Part properties that are dependent on each other.

To Test

  • Test different values for properties in the tool pane.
    Note : If a property is not visible in the UI, or is disabled, you can open the Web Part Page in FrontPage, switch to Code view, and then set the properties by editing the XML. Export the Web Part, save the .dwp file and then modify the .dwp file.
  • Import the .dwp file back into the page and check the property values.

15. Verify that Web Parts work correctly with different combinations of Web Part zone settings.
Web Part zones have properties that control whether a user can persist changes. If a user attempts to save changes to a Web Part without the correct permissions, a broken page can result.
The following Web Part zone properties affect Web Parts in the zone:

  • AllowCustomization : If false, and a user is viewing the page in shared view, the Web Part cannot persist any changes to the database.
  • AllowPersonalization : If false, and a user is viewing the page in personal view, the Web Part cannot persist any changes to the database.
  • LockLayout : If true, changes to the AllowRemove, AllowZoneChange, Height, IsIncluded, IsVisible, PartOrder, Width, and ZoneID properties are not persisted to the database regardless of view.

To Test

  • Create a page in the browser, and then add your Web Part into several zones, both in shared and personal views.
  • Open FrontPage. Open a Web Part Page on a SharePoint site and, in Design view, double-click a Web Part zone (or right-click over a Web Part zone, and then on the shortcut menu, click Web Part Zone Properties), and then change the zone properties.  Alternatively, you can switch to Code view and type in the attributes for the Web Part zone control.
  • View the page in the browser.
  • Verify that the part does not break the page and functions correctly.
  • Verify that any UI displayed in the Web Part indicates to the user that changes cannot be persisted or that UI is disabled as appropriate for the zone setting.

16. Verify that the Web Part renders appropriately based on user permissions.
Because a Web Part is managed by the user at run time, the Web Part should render with a user interface that is appropriate for each user’s permissions.

To Test

  • Test with different User accounts that have only Reader or Contributor rights.
  • Make sure the UI is suppressed if the end user does not have permissions to perform a certain action. (For example, if a Web Part displays a Save button, it should be disabled or hidden if the user does not have permissions to perform that action.)
  • Turn on anonymous access for the site and browse a Web Part Page that has your Web Part, but make sure the sign-in button is still visible on the page. (When the sign-in button is displayed on the page, the user has not yet been authenticated.)

17. Verify that adding several instances of the same Web Part to a Web Part Page (or in the same Web Part zone) works correctly.
When you want multiple Web Parts to share client-side script, you should place the script in an external file and register the script for the page to improve performance and simplify maintenance.

To Test

  • Add several instances of the Web Part to the page. Be sure to execute any client-side script that is specific to the Web Part.
  • Add several instances of the Web Part to the same Web Part zone. Be sure to execute any client-side script that is specific to the Web Part.

18. Verify that Web Part caching works correctly.
For any operation that works with a large amount of data, use a Web Part cache to store property values and to expedite data retrieval.
Web Part authors can choose to cache data in several ways, but ultimately the administrator decides the type of caching that a Web Part uses.
Following are the three types of cache:

  • None, which disables caching.
  • Database, which uses the content database (and requires all objects to be serialized).
  • CacheObject, which uses the ASP.NET Cache object (the default).

To Test

  • You set the type of cache using the value of the WebPartCache element in the web.config file.
  • In the web.config file, change the <WebPartCache Storage="CacheObject"> statement to <WebPartCache Storage="Database">, and make sure that the Web Part still works correctly.
  • Change the statement to <WebPartCache Storage="None">, and then make sure that the Web Part still works correctly.
    Note : By default, exceptions related to caching are not displayed by the Web Part infrastructure. For debugging purposes only, you can make the following changes to your web.config file.
    In the <system.web> tag, locate the <customErrors mode="On"> tag and change it to <customErrors mode="Off"> to see the ASP.NET exception when an error occurs instead of being redirected to the error page.
    In the <SharePoint> tag, locate the <SafeMode MaxControls="50" CallStack="false"/> tag and change it to <SafeMode MaxControls="50" CallStack="true"/>. This causes the ASP.NET error message to display with stack trace information.

19. Verify that requests to other HTTP sites or Web services are asynchronous.
For performance reasons, you should use an asynchronous thread for any operation that works with a large amount of data.

To Test

  • Check with the developer to see if she or he is making any calls to Web services or other HTTP sites. Confirm that the calls are asynchronous.
  • Run some performance tests on a page with the Web Part.

Net Applications caused a bit of a stir this week with a report that showed Microsoft’s operating system share had dipped below 90 percent. This played very well where anti-Microsoft sentiment was strongest, not surprisingly.

Net Applications uses software sensors at 40,000 Web sites around the world to measure traffic and come up with its stats. These stats include operating system, browser, IP address, domain host, language, screen resolution, and a referring search engine, according to Vince Vizzaccaro, executive vice president of marketing and strategic alliances for Net Applications.

However, Net Applications noticed something unusual with the stats from Google’s own domain, which would represent Google (NASDAQ: GOOG) employees, not the public at large that uses its search engine. Two-thirds of the visitors from that domain did not hide what operating system they were running, which Net Applications recorded in its survey.

One-third, however, were unrecognized even though Net Applications’ sensors can detect all major operating systems including most flavors of Unix and Linux. Even Microsoft’s new Windows 7, which is deployed internally at Microsoft headquarters, would show up by its identifier string. But the Google operating systems were specifically blocked.

“We have never seen an OS stripped off the user agent string before,” Vizzaccaro said. “I believe you have to arrange to have that happen; it’s not something we’ve seen before with a proxy server. All I can tell you is there’s a good percentage of the people at Google showing up [at Web pages] with their OS hidden.”

A proxy server shouldn’t cause such a block because it would block everything, which Net Applications sees all the time. With the one-third obfuscated Google visitors, it was only the OS that was removed. Their browser, for example, was not hidden. And two-thirds of Google systems surfing the Web identified their OS, mostly Linux.
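The kind of detection the sensors perform can be sketched as pattern matching on the user-agent string; the table below is a hypothetical, heavily simplified subset of what a real measurement service would use:

```python
import re

# Illustrative subset of OS signatures; real detection tables are
# far more extensive
OS_PATTERNS = [
    ("Windows", re.compile(r"Windows NT [\d.]+")),
    ("Mac OS", re.compile(r"Mac OS X")),
    ("Linux", re.compile(r"Linux")),
]

def detect_os(user_agent: str) -> str:
    """Return the first OS token found in the user-agent string."""
    for name, pattern in OS_PATTERNS:
        if pattern.search(user_agent):
            return name
    return "unknown"

print(detect_os("Mozilla/5.0 (X11; Linux x86_64) Firefox/3.5"))  # Linux
print(detect_os("Mozilla/5.0 (compatible) Chrome/1.0"))          # unknown
```

A user agent with the OS token stripped, as described above, falls through every pattern and lands in the “unknown” bucket, while the browser token remains perfectly identifiable.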

Internal deployment would make sense, as that’s the best way to test an operating system or anything else under development. Microsoft (NASDAQ: MSFT) has Windows 7 deployed over certain parts of its Redmond campus, using its staff as testers by making them work with it daily. The company refers to this as “eating their own dogfood.”

Google’s secret OS?

So what’s Google hiding? When asked, the company sent a statement that it would not comment on rumor and speculation. But some Silicon Valley watchers think they know: the long-rumored, software-as-a-service-oriented Google OS.

“I think they could be working on an application infrastructure, because an operating system really connotes the stuff that makes the hardware and software talk to each other, and they are not in that business,” said Clay Ryder, president of The Sageza Group. “But as an infrastructure for building network apps, I would think Google would be working on something like that,” he continued. “They’ve been rolling out more and more freebie apps and I would think they would eventually want to make some money the old fashioned way. It would make a lot of sense that they would want to have a network app infrastructure that they could roll out most anywhere.”

Such an OS would be an expanded version of the Android OS the company recently released for mobile phones, said Rob Enderle, principal analyst for The Enderle Group. “They were clear they were going to go down this direction, with a platform that largely lives off the cloud with Google apps,” he said. “Look at it as the Android concept expanded to a PC.”

Both felt Google would not take on Microsoft on the operating system level, because its goal was to make that level irrelevant. “I would never expect Google to get into a desktop OS space,” said Ryder. “That just doesn’t make sense. But for a network application infrastructure that is not dependent on the hardware but just the usage of a client, that would make more sense.”

Enderle noted this would be the final piece after Google Apps, the Chrome browser and the Toolbar, which combined are the total user experience, all provided by Google. An underlying infrastructure similar to Android to run it all would be the logical conclusion. “If you think about it, if you live off Google tools, the company that provides the experience into everything else would be Google, not Microsoft,” he said. “It’s an interesting strategy and I think it could work, but it would be premature to bring that to market because Chrome is not ready.”

Scrum is an iterative, incremental process of software development commonly used with agile software development. Despite the fact that “Scrum” is not an acronym, some companies implementing the process have been known to adhere to an all-capital-letter expression of the word, i.e. SCRUM. This may be due to one of Ken Schwaber’s early papers capitalizing SCRUM in the title.
Although Scrum was intended to be for management of software development projects, it can be used in running software maintenance teams, or as a program management approach.

In 1986, Hirotaka Takeuchi and Ikujiro Nonaka described a new holistic approach that increases speed and flexibility in commercial new product development. They contrasted this approach, in which the phases strongly overlap and the whole process is performed by one cross-functional team, with the traditional sequential, relay-race approach. The case studies came from the automotive, photocopier, computer and printer industries.
In 1991, DeGrace and Stahl, in Wicked Problems, Righteous Solutions, referred to this approach as Scrum, a rugby term first used in this context by Takeuchi and Nonaka in their article. In the early 1990s, Ken Schwaber used an approach that led to Scrum at his company, Advanced Development Methods. At the same time, Jeff Sutherland developed a similar approach at Easel Corporation and was the first to call it Scrum. In 1995 Sutherland and Schwaber jointly presented a paper describing Scrum at OOPSLA ’95 in Austin, its first public appearance. Schwaber and Sutherland collaborated during the following years to merge these writings, their experiences, and industry best practices into what is now known as Scrum. In 2001 Schwaber teamed up with Mike Beedle to describe the method in the book “Agile Software Development with SCRUM”.

Scrum is a process skeleton that includes a set of practices and predefined roles. The main roles in Scrum are the ScrumMaster who maintains the processes and works similarly to a project manager, the Product Owner who represents the stakeholders, and the Team which includes the developers.
During each sprint, a 15-30 day period (length decided by the team), the team creates an increment of potentially shippable (usable) software. The set of features that go into each sprint come from the product backlog, which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting the Product Owner informs the team of the items in the product backlog that he wants completed. The team then determines how much of this they can commit to complete during the next sprint. During the sprint, no one is able to change the sprint backlog, which means that the requirements are frozen for a sprint.
Scrum enables the creation of self-organizing teams by encouraging co-location of all team members, and verbal communication across all team members and disciplines that are involved in the project.
A key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach – accepting that the problem cannot be fully understood or defined, and focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements. One of Scrum’s biggest advantages is that it is very easy to learn and requires little effort to start using.

Several roles are defined in Scrum; these are divided into two groups, pigs and chickens, based on a joke about a pig and a chicken.
A pig and a chicken are walking down a road. The chicken looks at the pig and says, “Hey, why don’t we open a restaurant?” The pig looks back at the chicken and says, “Good idea, what do you want to call it?” The chicken thinks about it and says, “Why don’t we call it ‘Ham and Eggs’?” “I don’t think so,” says the pig, “I’d be committed but you’d only be involved.”
So the pigs are committed to building software regularly and frequently, while everyone else is a chicken – interested in the project but not accountable, because if it fails they were not the ones who committed to doing it. The needs, desires, ideas and influences of the chicken roles are taken into account, but they are not allowed to affect, distort or get in the way of the actual Scrum project.

“Pig” roles
Pigs are the ones committed to the project and the Scrum process; they are the ones with “their bacon on the line.”

  • Product Owner
    The Product Owner represents the voice of the customer. They ensure that the Scrum Team works with the right things from a business perspective. The Product Owner writes User Stories, prioritizes them, then places them in the Product Backlog.
  • ScrumMaster (or Facilitator)
    Scrum is facilitated by a ScrumMaster, whose primary job is to remove impediments to the ability of the team to deliver the sprint goal. The ScrumMaster is not the leader of the team (as they are self-organizing) but acts as a buffer between the team and any distracting influences. The ScrumMaster ensures that the Scrum process is used as intended. The ScrumMaster is the enforcer of rules.
  • Team
    The team has the responsibility to deliver the product. A small team of 5-9 people with cross-functional skills to do the actual work (designer, developer, tester, etc.).

“Chicken” roles
Chicken roles are not part of the actual Scrum process, but must be taken into account. An important aspect of an Agile approach is the practice of involving users, business and stakeholders into part of the process. It is important for these people to be engaged and provide feedback into the outputs for review and planning of each sprint.

  • Users
    The software is being built for someone!
  • Stakeholders (Customers, Vendors)
    The people that will enable the project and for whom the project will produce the agreed upon benefit(s) which justify it. They are only directly involved in the process at sprint reviews.
  • Managers
    People that will set up the environment for the product development organizations.


  • Product Backlog : It is a high-level document for the entire project. It contains broad descriptions of all required features, wish-list items, etc. It is the “What” that will be built. It is open and editable by anyone. It contains rough estimates of both business value and development effort. Those estimates help the Product Owner to gauge the timeline and, to a limited extent, priority. The product backlog is the property of the Product Owner. Business value is set by the Product Owner. Development effort is set by the Team.
  • Sprint Backlog : It is a highly detailed document containing information about how the team is going to implement the requirements for the upcoming sprint. Tasks are broken down into units of hours, with no task being more than 16 hours; if a task is estimated at more than 16 hours, it should be broken down further. Tasks on the sprint backlog are never assigned; rather, tasks are signed up for by the team members as they like. The sprint backlog is the property of the Team. Estimations are set by the Team.
  • Burn Down : It is a publicly displayed chart showing remaining work in the sprint backlog. Updated every day, it gives a simple view of the sprint progress. It should not be confused with an earned value chart.
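The sprint backlog mechanics described above (hour-level tasks, the 16-hour breakdown rule, sign-up rather than assignment, and a daily remaining-work total) can be sketched as a small data model. This is an illustrative sketch only; the class and field names are my own, not part of any Scrum standard:

```python
from dataclasses import dataclass, field

MAX_TASK_HOURS = 16  # Scrum guideline: larger tasks should be broken down further


@dataclass
class Task:
    description: str
    remaining_hours: int

    def __post_init__(self):
        # Enforce the "no task over 16 hours" rule from the sprint backlog.
        if self.remaining_hours > MAX_TASK_HOURS:
            raise ValueError(
                f"'{self.description}' exceeds {MAX_TASK_HOURS}h; break it down further"
            )


@dataclass
class SprintBacklog:
    tasks: list = field(default_factory=list)

    def sign_up(self, task: Task):
        # Tasks are never assigned; team members sign up for them.
        self.tasks.append(task)

    def remaining_hours(self) -> int:
        # This daily total is what the burn-down chart plots.
        return sum(t.remaining_hours for t in self.tasks)


backlog = SprintBacklog()
backlog.sign_up(Task("Design login page", 8))
backlog.sign_up(Task("Implement login API", 12))
print(backlog.remaining_hours())  # 20
```

Recording `remaining_hours()` once per day, after the daily re-estimation, yields exactly the series a burn-down chart displays.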

Following are some general practices of Scrum:

  • Customers become a part of the development team.
  • Like all other forms of agile software processes, Scrum has frequent intermediate deliveries with working functionality. This enables the customer to get working software earlier and enables the project to change its requirements according to changing needs.
  • The development team itself frequently produces risk assessment and mitigation plans: risk mitigation, monitoring, and management (risk analysis) at every stage and with commitment.
  • Transparency in planning and module development – Let everyone know who is accountable for what and by when.
  • Frequent stakeholder meetings to monitor progress: balanced (delivery, customer, employee, process) dashboard updates and stakeholder updates provide an advance-warning mechanism.
  • No problems are swept under the carpet. No one is penalized for recognizing or describing any unforeseen problem.
  • Workplaces and working hours must be kept energized: "working more hours" does not necessarily mean "producing more output."

The following terminology is used in Scrum:

    Product Owner :
    The person responsible for maintaining the Product Backlog by representing the interests of the stakeholders.
    ScrumMaster : 
    The person responsible for the Scrum process, making sure it is used correctly and maximizes its benefits.
    Team : 
    A cross-functional group of people responsible for managing itself to develop the product.
    Scrum Team :
    Product Owner, ScrumMaster and Team.
    Sprint Burn Down Chart : 
    Daily progress for a Sprint over the sprint’s length.
    Product Backlog :
    A prioritized list of high level requirements.
    Sprint Backlog : 
    A list of tasks to be completed during the sprint.
    Sprint : 
    A time period (typically between 2 weeks and 1 month) in which development occurs on a set of backlog items that the Team has committed to.
    Sashimi : 
    A slice of the whole equivalent in content to all other slices of the whole. For the Daily Scrum, the slice of sashimi is a report that something is done.

Though Scrum was originally applied to software development only, it can also be successfully used in other industries. Now Scrum is often viewed as an iterative, incremental process for developing any product or managing any work.

Scrum applied to product development
Scrum as applied to product development was first referred to in "The New Product Development Game" (Harvard Business Review, 86116:137-146, 1986) and later elaborated in "The Knowledge-Creating Company" (Oxford University Press, 1995), both by Ikujiro Nonaka and Hirotaka Takeuchi. Today there are records of Scrum being used by ADM to produce financial products, Internet products, and medical products.

Scrum as a marketing project management methodology
As marketing is often executed in a project-based manner, many generic project management principles apply to it, and marketing can also be optimized with project management techniques. The Scrum approach to marketing is believed to help overcome problems experienced by marketing executives. Short, regular meetings are important for small marketing teams, as every member of the team has to know what the others are working on and in what direction the whole team is moving. Scrum in marketing makes it possible to:

  • Spot possible problems at early stages and cope with them more quickly and with minimal losses. Following the key Scrum principle that "no problems are swept under the carpet", every team member is encouraged to describe the difficulties they are experiencing, as these might influence the work of the whole group.
  • Reduce financial risk. At the beginning of every sprint, the business owner can change any of the marketing project parameters without penalty, including increasing investment to reach more consumers, reducing investment until unknowns are mitigated, or financing other initiatives.
  • Make marketing planning flexible. Short-term marketing plans based on sprints can be much more effective. Marketing managers get an opportunity to switch from one promotion method to another if the first proved unsuccessful during the sprint. It also becomes easier to communicate the due date of every small but important task to each member of the team.
  • Involve clients in various ways.

There’s also a tendency to execute Scrum in marketing with the help of Enterprise 2.0 technologies and Project management 2.0 tools.

References
  • Schwaber, Ken (1 February 2004). Agile Project Management with Scrum. Microsoft Press. ISBN 978-0-7356-1993-7.
  • Takeuchi, Hirotaka; Nonaka, Ikujiro (January-February 1986). "The New Product Development Game" (PDF). Harvard Business Review.
  • DeGrace, Peter; Stahl, Leslie Hulet (1 October 1990). Wicked Problems, Righteous Solutions. Prentice Hall. ISBN 978-0-13-590126-7.
  • Sutherland, Jeff (October 2004). "Agile Development: Lessons Learned from the First Scrum" (PDF).