Spam Filtering

Akismet had a DNS outage. Of course, that is the reason so much spam could get through, and so much manual spam filtering was required. Akismet has become as crucial as the Web itself; the Web would not be this efficient without tools like Akismet. Akismet is like the police, only quicker. Society would be a mess without them.


*OA – The Different Web Programming Paradigms

Update: A Spanish translation of this post is available, thanks to Adrián Moreno Peña.

The Web has been the apex of networking, from both the software and the social perspective. This extreme software networking has turned the Web into a platform for communication, social networking, collaboration, publishing as a medium, and application programming. However, as mentioned earlier, the Web was originally designed for documents, as a way of sharing information. To be able to do what we wanted to do with the Web, we had to make it programmable.

Programming The Web

The Web today is programmable. It can be programmed for various purposes, some of which we mentioned in the beginning. This programmability is derived from mature, generic software programming principles: we need an architecture, a design and an implementation. Generic software programming has evolved through a lot of changes, some of which qualify as improvements. From programming styles like procedural programming, the focus has moved on to architecture and design, producing techniques like OOP and the newer Agile and Model Driven Architecture approaches. Along similar lines, the Web today is full of upcoming architectures and approaches. Let's mull over some of them here.

Services

The concept of a service was created to emphasize loose coupling and a client-server relationship. Pre-Web software was usually tied to the hardware and the associated platforms. The Web, being so open and ubiquitous, cannot afford that; it was meant for sharing, irrespective of these restrictions. Hence came the concept of a service: a function with a purpose that serves all clients without any restriction on their implementation details.

Service Oriented Architecture (SOA)

Exposing such a collection of services that clients could avail of came to be termed Service Oriented Architecture (SOA). These services communicated with each other; some collaborated and some were standalone.

To be able to do a handshake, clients had to obey the protocols specified by the service. The most popular ones were XML-RPC and SOAP, which focused on abstracting the Web for applications and domains. A different approach was taken with REST, which focused on using the Web as it is, by following its basic principles.

The advantage of SOA was that businesses could now choose between services without being hindered by technology or organizational boundaries. Neither the definition nor the specifications of SOA were limited to, or dependent on, the Web. SOA allowed interesting mashups and integrations. SaaS is completely based on this and has been able to bring an outsourcing-like concept to businesses.

However, there are some key disadvantages to this approach. The biggest is that, in an effort to be platform agnostic and portable, SOA is buried under a load of specifications. It is getting increasingly difficult and costly to comply with the protocols and talk to a service. Another disadvantage, which is not always severe, is that the services are not discoverable. Prior knowledge of a service is required to be able to use it, which mandates a directory of services. Since the Web is boundless by nature, it is impossible to keep such a directory. This makes SOA less reachable.

Web Oriented Architecture (WOA)

WOA came along to make SOA lighter and more popular. It is essentially a subset of SOA which recommends using REST over heavier counterparts like SOAP. REST's philosophy of differentiating between network programming and desktop programming makes it simpler to use for the former.

WOA is more customized for the Web by including REST. And by specializing, it can strip off the heavy abstractions needed to be all-inclusive.

Resource Oriented Architecture (ROA)

Here comes a radical approach, well, radical from the SOA perspective. Alex Bunardzic introduced ROA. While WOA is conceptually still SOA, ROA is a rebel for a good reason. Alex points out that the concept of services might not apply to the Web. As mentioned earlier, services cannot be discovered and it is not possible to maintain a catalog. This is where SOA goes against the Web; ROA believes that the Web is exploratory by nature.

Because of the uniqueness of the web as a medium, the only abstraction that does it true justice is resource. The web is a collection of resources. These resources are astronomically diverse, and it would be mathematically impossible to maintain any semblance of a reasonable inventory of web resources.

Each resource on the web, no matter how unique and complicated it may be, obeys only one protocol. This protocol has three major aspects to it. In no particular order, these aspects are:

  1. Each resource knows how to represent itself to the consumer
  2. Each resource knows how to make a transition from one state to another state
  3. Each resource knows how to self-destruct
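
To make these aspects concrete, here is a minimal sketch (my own illustration, not Alex's) of how a single resource URI, say /articles/42, could honour all three using nothing more than the uniform HTTP methods; the render/update/delete helpers are hypothetical.

```php
<?php
// Hypothetical sketch: one resource, one URI, and only HTTP methods as the contract.
// render_article(), update_article() and delete_article() are made-up helpers.
switch ($_SERVER['REQUEST_METHOD']) {
    case 'GET':     // 1. the resource represents itself to the consumer
        header('Content-Type: text/html; charset=utf-8');
        echo render_article(42);
        break;
    case 'POST':    // 2. the resource transitions from one state to another
    case 'PUT':
        update_article(42, file_get_contents('php://input'));
        header('Location: /articles/42', true, 303);
        break;
    case 'DELETE':  // 3. the resource knows how to self-destruct
        delete_article(42);
        http_response_code(204);
        break;
    default:
        http_response_code(405); // any other method is simply not allowed
}
```

The point of the sketch is only that no service catalog is needed: the URI plus the uniform methods are the whole contract.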

ROA is more of a paradigm than an architectural approach; it considers resources to be the elements of the Web. The key part, however, is that they can be discovered, and once they are discovered they can represent themselves. There is no requirement of prior knowledge of a resource to start a conversation, as against knowing the capabilities of a service in SOA. ROA is completely based on REST and basks in its advantages – simplicity, minimal technical requirements and a URI for every resource. The use of the basic elements of the original WWW makes it easy for one resource to talk to another.

The only disadvantage I see of ROA is that it is defined only for the Web. Although there can be analogous implementations in other areas, unlike SOA it has not been conceptualized for non-Web platforms. There are new developments happening in this area, but it is still not as mature as SOA.

Epilogue

On analysis, all of these focus on having a standardized interface. ROA is simpler than SOA and uses hyperlinks effectively to reach a wider base. But whether that is a requirement will be determined by the business need.

As a software developer, what is in store for me in all this? Well, these paradigms are about to define the direction in which Web programming will head in the future. The one that dominates will survive. However, to be dominant it will have to prove itself loyal both to the Web and to businesses. If they co-exist, it will be critical to identify the applicability of each. If not, there will have to be preparations to handle the winner's disadvantages. Either way, these will affect the businesses in which they are used. And with the Web playing a very important role today, this impact will not be ignorable!



Copyright Abhijit Nadgouda.


Should The Web REST Its Case?

Today the Web is being treated as an application and messaging platform, as a publishing platform and as a medium. However, the initial intent, and hence the design, of the Web was to host documents and make them available to everyone. Here is an excerpt from the summary of the World Wide Web by Tim Berners-Lee:

The WWW world consists of documents, and links. Indexes are special documents which, rather than being read, may be searched. The result of such a search is another (“virtual”) document containing links to the documents found. A simple protocol (“HTTP”) is used to allow a browser program to request a keyword search by a remote information server.

The web contains documents in many formats. Those documents which are hypertext, (real or virtual) contain links to other documents, or places within documents. All documents, whether real, virtual or indexes, look similar to the reader and are contained within the same addressing scheme.

In a nutshell, the Web was intended for documents so that information could be shared. The design of the Web and underlying techniques like the HyperText Transfer Protocol (HTTP) and HTML target these hyperlinked documents and exclude the modern connotations.

Protocols

SOAP and XML-RPC

To be able to do more with the Web, a layer of abstraction was introduced. This layer brought new protocols, data structure formats and new rules to abide by. XML-RPC is a product of this attempt; it later evolved into SOAP to handle enterprise scenarios. Let's club both of them together for our purpose. The purpose of these protocols was to ensure communication between disparate machines, on disparate platforms, with disparate programming environments. That they did to the fullest extent. Utilities were offered as services, which clients could use by making requests using the protocols. SOAP has since evolved to become more and more inclusive, stricter and more tedious. A lot of specifications were developed, which caused the effort and the cost of using a service to climb.

There are two problems with SOAP. One is that the Web was being used for all kinds of things, a lot of which were not enterprise or corporate, and SOAP started getting oversized and bulky for them. Secondly, SOAP uses the POST method of HTTP. (HTTP provides two commonly used methods – GET and POST. GET lets you retrieve information and provides a Uniform Resource Identifier (URI) for it. This URI can be used as an identifier for that information or resource. To use POST, a package has to be sent to the web server; a simple URI does not suffice.) Using POST meant SOAP had to do away with the URI and its associated basic benefits of simplicity, convenience and caching.

REST

So came in a new perspective: REST – Representational State Transfer. REST, coined by Roy Fielding in Architectural Styles and the Design of Network-based Software Architectures, takes an approach contrary to SOAP. Instead of building over the basics of the Web, it tries to optimise the Web as it is. It uses GET to request information and identifies every single resource with a URI. This URI can now be used by anyone, anywhere – a simple string that can identify and locate a resource on the Web. No additional protocols other than HTTP; just use the URIs that form the hyperlinks. Keep it simple and keep it accessible – which very much goes with the ideology behind the WWW summary.
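
As a rough illustration (the endpoints and the envelope below are made up, not real services), compare fetching a resource with a bare GET on a URI against wrapping the same question in a SOAP envelope and POSTing it:

```php
<?php
// REST-style: the URI alone names the resource; anyone can link to it,
// bookmark it or cache it.
$forecast = file_get_contents('http://example.com/weather/london');

// SOAP-style: the same request is wrapped in an XML envelope and POSTed to a
// single endpoint, so there is no per-resource URI to pass around.
$envelope = <<<XML
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetWeather><City>London</City></GetWeather>
  </soap:Body>
</soap:Envelope>
XML;

$context = stream_context_create([
    'http' => [
        'method'  => 'POST',
        'header'  => 'Content-Type: text/xml; charset=utf-8',
        'content' => $envelope,
    ],
]);
$soapResponse = file_get_contents('http://example.com/soap/endpoint', false, $context);
```

The GET version is what lets a plain string stand in for the resource; the POST version hides the resource behind the envelope, which is exactly the simplicity and caching trade-off mentioned above.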

With the emergence of Web 2.0, there was a search for an easier and more open paradigm for using the Web, which was found in REST. I am with REST for now. However, I am not sure if it will be an oversimplification for some problems. Only time will tell!

The RSS Blog has a good illustration of all three protocols. I sometimes wonder if a combination of these protocols would provide a better solution in some cases. A lot of discussions end up in flaming and finger-pointing; however, there are some good thoughts out there.

Non-participation by W3C

It would be better if the World Wide Web Consortium (W3C) participated in the creation of these paradigms and protocols. The W3C is the authority and can play the role of keeping the integrity of the basic principles of the Web. The existence of multiple protocols is a bigger problem than just a development inconvenience: it can divide the Web into incompatible parts, which would ultimately be the failure of the WWW.


Copyright Abhijit Nadgouda.


Web Design – Art Or Engineering?

This has been the single most troubling question for me since I arrived in the Web arena. Systems programming and application programming tilt more towards the engineering aspect, applying engineering basics to designing the UI (user interface). However, because the Internet is being treated as a medium rather than a platform, art has more scope here. I have seen the Photoshop guys and the content guys at each other’s necks to own the design. Who gets the credit?

Content or Graphics?

My engineering background biases me towards the content. On the web, content is king. Give importance to content identification, information architecture and user profiling, and then design. Use a Content Management System. A web site should support the standards, and should be usable, accessible (at least to its intended audience) and, more importantly, secure. But it cannot be just this! In today’s competition for the top berth, graphic design plays an important role. Users are not ready to go with anything that is drab and already done. It has to be fresh, with new ideas. And it has to be usable, accessible – wow, am I going in circles?

Tommy Olsson of Accessites.org analyzes the two approaches designers take – visual and structural – and attempts a possible solution. The primary difference is that structural design will flow with the content, whereas visual design will end up filling spaces with content. The structural approach can end up looking boring and too engineered, whereas, as Tommy mentions, the visual approach can put less focus on the usability and accessibility aspects. He goes on to speculate:

Why, then, is the visual approach so much more prevalent than the structural? One reason is that most people think visually, especially when it comes to web design. Many also find abstract thinking very difficult, and abstract thinking is required for the structural design approach. Furthermore, visual designers believe that starting with the content will impose limitations on the design possibilities. The main reason, of course, is most likely that many designers use WYSIWYG tools like Dreamweaver or FrontPage, which are design-centric to the extreme.

That is the key: either party ends up using tools which are design-centric to the extreme. The visual designers see content as an impediment, and the structural ones view graphical design as a restriction. One thing is sure: today both are important.

Tommy wonders whether a visual and a structural designer with equal skills in HTML, CSS, usability, accessibility and graphic design would produce visually identical designs. Practically, it will be difficult to test this, and even if it is done, the design will change depending on whether you focus first on the graphics or first on the content. Ideally each should be done by the corresponding domain expert, and then both should be blended together.

Both

Would it not be great if both of them sat together and sorted out the issue? Instead of stubborn designs on both sides, can there be design ideas and a brainstorming session to materialize those ideas? Both parties can contribute to each other’s designs from their own perspective. It can become imperative, in fact, in cases where graphics are part of the content. A case to consider: when putting up images, the art side will focus more on colors and textures, whereas the engineering side will consider the impact of the images on size and performance. Which of these has more importance probably depends on the type of website and the type of target audience. I would tend to invest in the structural approach when designing for a newspaper; however, the weight can be heavier on the visual approach when designing for an art gallery.

Ultimately, the resulting website is a blend of both, so they have to be treated together and approved together. There is no room for one-upmanship. Web design is both art and engineering, and what the user should see is a balance between the two.


Copyright Abhijit Nadgouda.

Web, JavaScript And Security

JavaScript is now mainstream, thanks to the popularity and extensive acceptance of AJAX. In fact, AJAX is considered to be a core part of Web 2.0.

Acceptance of a technology by the industry has always been subject to its scanning under the security microscope, which has caused delays in accepting new things. JavaScript seemed to follow the same road, until AJAX came around. AJAX gives the wonderful capability of behind-the-scenes requests that keep the web page dynamic and make it more user-friendly and attractive to the user.

JavaScript has matured; however, its security model has not. JavaScript opens doors to browser-based attacks. This may sound like the same old crib against scripting, but delve a little more into the side-channel attacks and the real danger surfaces:

“We have discovered a technique to scan a network, fingerprint all the Web-enabled devices found and send attacks or commands to those devices,” said Billy Hoffman, lead engineer at Web security specialist SPI Dynamics. “This technique can scan networks protected behind firewalls such as corporate networks.”

The popular mode of attack today is exploiting the different browser vulnerabilities. But JavaScript can now get inside your network. Once inside the network, JavaScript can attack any IP-enabled device, including servers, routers or printers. This is no longer limited to the user’s machine; the danger expands to the entire network, including corporate ones. Along with Web 2.0, these attack strategies too will mature, and new websites can end up being havens for hackers, leading to another cat-and-mouse game.

The good thing about seamless integration with scripting turns evil, as the user will never know whether his/her machine or network has been attacked, unless the user is knowledgeable enough to set the security to the right level. Every computer user cannot be expected to know the JavaScript vulnerabilities or to keep his/her antennas up for JavaScript problems. It would defeat productivity, which is the ultimate purpose of using computers.

Security makes it difficult

Various new web frameworks have come up which allow easy AJAX integration and building sites quickly. However, once the different vulnerabilities are considered, it is not easy any more. Consider cross-site scripting, cross-zone scripting or the new dangers of JavaScript.
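
To take just the first of those, cross-site scripting is usually blunted by one unglamorous habit: never echoing user-supplied text into a page unescaped. A minimal sketch (my own, not from any particular framework):

```php
<?php
// User input may contain markup such as <script>…</script>.
$comment = $_POST['comment'] ?? '';

// Escaping turns that markup into inert text instead of executable code.
echo '<p>' . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8') . '</p>';
```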

Security does not figure as one of the primary requirements in many applications. Either the client is not very interested, or even if it is considered, its cost might turn it into a good-to-have feature. Many a time, a project starts with a reduced scope where security is not urgent and is ignored. However, the project evolves with time, and then it is more difficult and expensive to make it secure. Today, Web 2.0 is headed that way.

Solutions?

Disabling JavaScript is the instant, reactive solution to this problem; however, it is not practical. Today scripting is ubiquitous. The solution lies in preventing hacks, not in avoiding scripting. Incorporating security into the JavaScript design involves changing its model, which entails changing almost every web application today, which might take time. The solution has to be a two-way approach – a policy-based solution and an effort to improve scripting environments.

Clients, designers, developers, browsers – the whole industry should accept policy-based decisions to avoid hacking. It would be perfect if there were a way of differentiating between well-intentioned and malicious code. Maybe there can be certifications to certify non-malicious code. Ted Dziuba presents a novel approach, though a little critical, by differentiating between a document and an application.

Indeed, JavaScript is useful when the main purpose of your work is an application. When you are presenting information, however, there should be no JavaScript between the user and that information. As I said earlier: we as developers have an obligation to the rest of the internet to classify our work as either document or application. So, the next time you think that having your entire web site as one page with AJAX controls, please, think of the crawlers.

Software creators should focus on security along with the rush to be quick and easy. Make the web site secure and safe along with making it dynamic, interactive and flashy.

The industry needs to hold back a bit, focus on the JavaScript vulnerabilities, prepare for them and then get gung-ho about it.


Copyright Abhijit Nadgouda.

Interview With Usability Guru

Usability guru Jakob Nielsen was interviewed (via Ajaxian) on usability and its relation to advertising.

One of the things that comes out of it is where AJAX should, and should not, be applied.

It’s important to remember that most web sites are not used repeatedly. Usually, users will visit a given page only once. This means that the efficiency of any given operation takes a back seat to the discoverability and learnability of the feature. Therefore, interaction techniques like drag-and-drop should almost never be used on web sites. Instead, focus on showing a few important features and making them accessible through a single click on a simple link or button.

Some business sites that are used repeatedly include features for approximating software applications. Online banking comes to mind, and I can easily envision a design that enables the user to see the front or back of a check through an AJAX technique on the account statement page, instead of going to a new page.

Do we simplify or in fact complicate the interaction by adding advanced features? This is a question to consider whenever any feature is being implemented. Ideally, any feature should be implemented such that it supports the maximum number of platforms and browsers without any third-party plugins.

He also mentions the classic problem of doing technology for technology’s sake. It is important to realise that the software being developed is for the customer and no one else.

Remember: just because you love technology and advanced features, it doesn’t mean that your customers do. They just want to get in and out without worrying about your web site. So take it easy on the features.

I agree entirely that added features or programming tricks turn into mere gimmicks if they are not targeted at being used. One of the best ways of identifying the required features is to identify their usage – who, why, how and when. If there are concrete answers to these questions, the feature becomes an important one. The rest fall into the category of gimmicks. Every Web 2.0 company talks about AJAX today, but how many talk about using it to improve the website?

He also talks about usability and conversion rates. Usability is an important factor: it makes the user comfortable, and that drives traffic. Advertising can get a user to the website; however, the conversion into a frequent user will depend on the content and on how usable the website is for consuming that content.


Copyright Abhijit Nadgouda.

Get More Accessible

This post is not about the commonly discussed, basic accessibility issues; they are very well covered by the Web Accessibility Initiative (WAI). This is about adding the finishing touches to get closer to being accessible, by designing with that intention.

Skip Links

Skip links function as navigators within the web page itself. They are required so that a person can navigate through the structure of the page with minimal clicks. They are an accessibility issue for those who cannot scroll or move through the page because of mobility problems, and they are also a usability issue for users with less than efficient tools for navigation, like mobile users.

A classic demonstration is at the 456 Berea Street site. The topmost links fall into the category of skip links, which users can use to skip to a specific part of the page. Since these links become part of the design itself, there are various ways of including them, one of which is discussed in the Accessites article. It describes a way of hiding the skip links from normal users while making them available to screen readers or on demand. You can try it out on the site, as Mike Cherim says:

I use an off-screen method, typically taking an unordered list and sending it a few thousand pixels into the darkness off-left — using the display property none should be avoided to ensure access to screen reader users. Then, one-by-one, employing a:focus (or a:active for IE users) in the CSS, I bring the anchors, not the list items, into view. In the interest of a best practice, I recommend locating them, when viewable, in the upper-left or across the top, giving them a good background and enough positive z-index in the CSS to ensure they stand out. An example of this is available right here on this site. Press Tab a couple of times to see the available skip links in action.

As you can see, on accessites.org skip links are provided to jump even to different types of information, like accessibility information. However, hiding the links falls into the arena of usability, which might not approve of it. The article very nicely highlights the importance of skip links and why developers should handle them today, to compensate for the lack of standardisation in user agents (browsers).

Whichever way they are included, skip links provide the last mile of accessibility. The fun part is that they are not at all difficult to implement: all they need are anchor names, or bookmarks as they are called.
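
As a minimal sketch of the off-screen technique Mike describes (the class name, ids and offsets are my own, not taken from his site):

```html
<!-- Hypothetical skip-link markup, placed right after the opening <body> tag -->
<style>
  /* Park the list off to the left instead of using display: none,
     so screen readers still announce the links. */
  .skip-links { position: absolute; left: -3000px; }

  /* Bring the anchor (not the list item) into view when it gains focus;
     a:active helps older versions of IE. */
  .skip-links a:focus,
  .skip-links a:active {
    position: fixed;   /* positioned against the viewport */
    left: 0;
    top: 0;
    z-index: 100;
    background: #ffc;
    color: #000;
    padding: 0.5em;
  }
</style>

<ul class="skip-links">
  <li><a href="#content">Skip to content</a></li>
  <li><a href="#navigation">Skip to navigation</a></li>
</ul>

<div id="navigation">…</div>
<div id="content">…</div>
```

Pressing Tab from the top of the page then reveals the links one by one, much as on accessites.org.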

CSS for multiple media

As part of theme development, Cascading Style Sheets should be developed for multiple media – screen, print, aural and other recognized media types.
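
For instance, a theme's stylesheet can carve out rules per medium; the selectors below are hypothetical and only sketch the idea:

```css
@media screen {
  body { font: 0.9em/1.5 Verdana, sans-serif; }
}

@media print {
  /* strip the navigation chrome on paper and use a print-friendly face */
  #navigation, #sidebar { display: none; }
  body { font: 11pt/1.4 Georgia, serif; color: #000; }
}

@media aural, speech {
  /* aural properties are discussed in a later post */
  body { volume: medium; voice-family: female; }
}
```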

Alternate High Contrast Theme

Providing a high contrast alternative can make your site more accessible to visually challenged users. Again, using 456 Berea Street as an example, the link in the top right corner – Switch to high contrast layout – does that. For some reason this option is not employed on many sites, when it is the most direct and fruitful way of making a site accessible.

Implementation In WordPress

Since WordPress is a popular blogging tool (and one of my favorites), let's use it to see how we can implement the discussed points.

The skip links themselves are nothing but links to specific parts of the page, which, as mentioned earlier, can be implemented using HTML anchors. They should typically be placed in a location which can be reached without any additional effort, something like the top-level navigation. Once the different parts of the page are identified, mark them up and change the theme to include the links; e.g., header.php can be modified to include the skip links.
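
For example (a hypothetical fragment; real themes differ), the targets can be marked up in the templates and the links printed from header.php:

```php
<?php /* header.php – print the skip links just after the opening <body> tag */ ?>
<ul class="skip-links">
  <li><a href="#content">Skip to content</a></li>
  <li><a href="#sidebar">Skip to sidebar</a></li>
</ul>

<?php /* index.php – give the main column the id the links point at */ ?>
<div id="content">
  <?php if (have_posts()) : while (have_posts()) : the_post(); ?>
    <h2><?php the_title(); ?></h2>
    <?php the_content(); ?>
  <?php endwhile; endif; ?>
</div>

<?php /* sidebar.php */ ?>
<div id="sidebar"><!-- widgets, blogroll, etc. --></div>
```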

WordPress supports CSS to the fullest extent, including CSS for media other than the default screen. It is up to the designer to provide it; WordPress does not cause any hindrance.

Switching to the alternate high contrast theme can be provided using the popular theme switcher plugin. The theme switcher temporarily changes the theme using cookies. You can modify the wp_theme_switcher() function to provide a link to the alternate high contrast theme. Of course, a high contrast theme has to be developed first. This is something designers should probably practice: provide a companion high contrast theme along with every theme they develop.
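
A hedged sketch of such a link, assuming the plugin switches themes via a query variable (shown here as wptheme); the parameter and the theme name must be verified against the copy of the plugin you use:

```php
<?php /* Hypothetical: ask the theme switcher to activate the companion
         high contrast theme. 'wptheme' and the theme name are assumptions. */ ?>
<a href="<?php echo esc_url( add_query_arg( 'wptheme', 'My Theme High Contrast' ) ); ?>">
  Switch to high contrast layout
</a>
```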

WordPress accessibility has been studied a lot, and there are some good resources on the subject.


Copyright Abhijit Nadgouda.

Let Your Readers Listen To You

The purpose of using CSS to design web pages is to separate style from content. This enables delivering the same content in multiple formats, through multiple media. CSS does go all the way in supporting different media, and a lot of web sites illustrate this today by using different styles for the print and screen media. These have an often ignored and rarely talked about kin – the audio medium. Joshua Briley visits aural stylesheets to propose them not only for the sake of accessibility, but also for usability.

Imagine being in a vehicle, listening to your GPS program. Wouldn’t it be nice if street names were spoken louder and slower, making the names easier to understand? Wouldn’t it also be nice to select a voice that speaks in a frequency range that is comfortable to hear? With aural style sheets, these options are already a reality.

W3.org explains the aural stylesheet properties and attributes. In a nutshell, they fall into these groups:

  • Volume properties
  • Speaking properties
  • Pause properties
  • Cue properties
  • Mixing properties
  • Spatial properties
  • Voice characteristic properties
  • Speech properties

These are very much analogous to properties in other media; e.g., the voice-family property is similar to the font-family property. However, the sound medium is richer than text because it can be three-dimensional and more than one voice can be involved. The spatial properties can be used to position the different voices, which not only differentiates between them but also conveys the space. Everything that can be done with voice and speech can be specified using these properties.
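
A short sketch of what such rules look like; the values, the sound file and the .streetname class are illustrative only:

```css
@media aural, speech {
  /* voice characteristics: analogous to font-family and font-size */
  body       { voice-family: paul, male; speech-rate: medium; volume: medium; }
  h1, h2     { voice-family: ann, female; stress: 60; richness: 90; }

  /* pause and cue properties: a breath and a sound before each main heading */
  h1         { pause-before: 1s; cue-before: url("bell.au"); }

  /* a spatial property: place quotations off to one side of the listener */
  blockquote { azimuth: far-right; }

  /* the GPS example above: street names louder and slower */
  .streetname { volume: loud; speech-rate: slow; }
}
```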

This makes aural stylesheets an important tool that can transform a web page into a presentation. They are useful and convenient in many more cases, e.g., for children, in education or for reading scripts; but they are still not popular. As Joshua points out, the reason is the less than minimal support in the popular browsers. However, we saw this with earlier versions of CSS too, and the push has to come from the users. History has shown that unless users start using and demanding a feature, neither the developers nor the browsers are going to incorporate it. The reason users don’t ask for it can be lack of information or lack of comprehensibility. The pioneers and leaders have to educate, promote aural stylesheets and illustrate their advantages.

Let's hope it happens soon and our readers can start listening to us.


Copyright Abhijit Nadgouda.


Adaptive Websites – The Future Of Web

Web 2.0 has ushered in a new era of democratic usage of the Web. It is more focused on the user than its earlier version was. This has pushed much more information, in many more formats, onto the Web.

The Problem

The Web is a major source of information today. However, it is also a source of information overload. It is not only user-generated content, but also professional publications like newspapers and magazines taking the online route. In addition to the stiff competition among online businesses, the Web is continuously changing and adapting to the demand from diverse users to display more relevant content. What is the best way to handle this? The answer lies in the infrequently mentioned concept of Adaptive Websites.

The First Bite

Carolyn Wei explains the concept by using Amazon.com as an example.

Adaptive websites use data provided by users and monitor their actions on the website to customize the content and layout that will interest the user; e.g., Lonely Planet could display more relevant weekend getaways by considering the user’s location or by understanding the types of getaways preferred.

No, portals like My Yahoo! are not adaptive websites; they allow the user to personalize content, but the onus is on the user. A website, by being adaptive, learns from its usage, learns from users’ experiences and adapts itself. The biggest difference between the two is the ability to learn and adapt. The result can transform the view, including changes in layout along with content.

The Meal

To be able to serve a user, adaptive websites build up user information within themselves, called user models. The user models depend on two types of information:

  • information provided by the user voluntarily
  • information gathered by the website over a period of time of usage

The former might comprise age, location, gender, profession or other deterministic factors, whereas the latter is more of an experience built out of multiple interactions with the website. Users leave breadcrumbs of their visits, which can be used to build information about their interests or likes. Sometimes the navigation options they use can provide more information about related or popular content.

However, it is difficult to track every single user like this. Adaptive websites use the technique of clustering to group users and build user models per cluster. Whenever a user visits the website, the cluster is identified and the corresponding user model is loaded.
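
A toy sketch of that last step, with everything about it hypothetical: the cluster names, the helper that loads the model, and the assumption that the click log reduces to a list of category names.

```php
<?php
// Toy sketch: bucket a visitor into a cluster based on the categories they
// have viewed, then load that cluster's user model (layout and content preferences).
function pick_cluster(array $viewed_categories): string
{
    if ($viewed_categories === []) {
        return 'general-readers';
    }
    $counts = array_count_values($viewed_categories);
    arsort($counts);                          // most-viewed category first
    $top = array_key_first($counts);

    $clusters = [
        'travel'  => 'weekend-getaway-fans',  // cf. the Lonely Planet example
        'finance' => 'market-watchers',
    ];
    return $clusters[$top] ?? 'general-readers';
}

$cluster = pick_cluster(['travel', 'travel', 'finance']);  // from the click log
$model   = load_user_model($cluster);                      // hypothetical helper
```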

To keep improving the user model, continuous monitoring, data logging and mining of the logs are required, which can affect performance. It is possible that the user does not provide accurate voluntary information, in which case the adaptation will fail. It is therefore important that the user is explicitly told about the significance of the information.

Come to think of it, this concept is applied in a lot of places – some websites display the appropriate language depending on the language preference in the HTTP headers or on the visitor's IP address (internationalization and localization). Rojo, a feed reader, asks the user about his/her interests and provides the popular feeds in those interest areas. However, this concept has to be applied to a much wider aspect of the website – its design and information architecture.

The Digest

Today, with online overtaking print, blogs being used for business and networking, and websites getting as common as the common man, it is important that websites now start understanding users rather than just recording them. The current path leads to Adaptive Websites, which will adapt themselves to the user.


Copyright Abhijit Nadgouda.

Is Internet Fueling Collectivism?

An amazing essay by Jaron Lanier about Digital Maoism delves into the behavior of collectivism and whether the Internet Age is fueling it. However, the core subject is much deeper than the digital world encouraging mob madness. It is about whether we can benefit from groupism and the Wisdom of Crowds, and, if so, when.

As Lanier mentions, this collectivism is seen in many places, through the participation of individuals. American Idol (or Indian Idol), elections, Wikipedia and the stock markets are some examples. However, there is a primary difference between the Wikipedia and American Idol models. Wikipedia nurtures objective, factual information, as against subjective opinions. A fact remains a fact, and its truth is not influenced by who has written about it or how many have written about it. The strength of a group can influence how much information is available. What Wikipedia has done is make this immense information available by using an open group of authors rather than a closed one. If a piece of information is not accurate or right, it can be re-edited to reflect the truth. In other words, inaccurate information by authors is corrected by its accurate version. Articles are written by humans, so errors are possible and will be corrected. However, the probability of correction improves with the number of people involved. It is not chaos; it is a system where inaccurate information is replaced by the accurate version, which can be verified because of its objectiveness.

Wikipedia, as an aggregator, does not mean that individual authorities are undermined. They are still respected and read. However, Wikipedia serves just as a single store of information and is not intelligent in itself. But it serves as one of the best references to have.

On the other hand, the American Idol model works by gathering votes based on subjective opinions and judgements from the common man. The tragedy is that people who do not have enough knowledge can not only participate, but their vote gets counted, whether any other knowledgeable person votes or not. Usually, only a minority of the population is expert on any subject. However, with votes simply being counted, the majority is always going to overrule the minority, thereby throwing away the argument for merit. Surprisingly, this model has been replicated in many countries, and even there the model is raking in money, which seems to be the primary goal. I haven’t seen the Indian Idols chosen improving their musical or performance abilities. However, the average talent that comes out of the model has improved; more performers now have exposure than before. The average beats the best!

While I agree that American Idol probably has the best business model and brilliantly exploits the mob mentality, it is not a good example of collectivism bringing out the best. If I say I don’t like a certain painting, that does not mean the painting is not good; it is my subjective opinion, and the painting itself should not be judged by it. A couple of years later I might change my perception and take a liking to the same artifact.

And there is also the blogosphere. Isn’t it another form of collectivism? What the blogosphere, as a whole, does is bring a subject into the view of many others. Frankly, I would not have read this essay if it were not for the blogosphere. However, this neither means that the subject is important, nor does it say whose opinions are important. It is entirely the onus of the individual to decide whether and how to act on the information. This probably applies even in the stock market. Mob mania causes spikes or troughs in the Sensex graph, but it is momentary. Whether to react to it or not will depend on the individual investor.

Elections are probably true processes of democratic participation by individuals to form a collective voice. However, there are multiple instances where they have completely failed. They have failed not because collectivism does not work here, but maybe because the citizens did not have enough information, or not enough of them participated, or the elections were completely rigged. Either way, this is another example where the total number of votes might not bring out the best.

While discussing this with a friend, what also came up was that sometimes not only the subjective opinion but also the aspect of the subject might matter. In software, users’ opinions count a lot when usability is being tested. I wonder how it would work if they were asked about the software engineering or the software process used for it. However, I can see that it works for usability because usability is targeted towards the users, and hence they are the best candidates for opinions on it. In this case, the proportion of the mass that reacted or opined is important.

Now to the central issue – is the internet fueling collectivism? We are seeing more and more aggregator models being used by businesses. The aggregator is a matter of convenience rather than intelligence; for example, it is convenient to read everything in one single place. The reason they have worked as business models is that they supply convenience, which is in demand by the users. If you look at Slashdot, it has been brilliant at reporting news that was not available in many places earlier. Sometimes even the discussions provide a lot of value, but it never tries to snatch the credit or the highlight from the original article itself. The aggregator model can build intelligence over and above that provided by an individual, which is not harmful. In fact, collaboration between groups of people has also led to the generative internet and to community marketing, as in the case of Mozilla Firefox.

Whether collectivism is good or not, and whether it works or not, depends more on the subject it is applied to and what it is used for. It is not good or bad by itself; its usage is. By nature, even this post is just another opinion of an individual, probably even a subjective one, and should not be counted as a vote.


Copyright Abhijit Nadgouda.