Spam Filtering

Akismet had a DNS outage. That, of course, is the reason so much spam got through and so much manual spam filtering was required. Akismet has become as crucial as the Web itself; the Web would not be this efficient without tools like Akismet. Akismet is like the police, only quicker. Society would be a mess without them.


*OA – The Different Web Programming Paradigms

Update: A Spanish translation of this post is available, thanks to Adrián Moreno Peña.

The Web has been the apex of networking, from both a software and a social perspective. This extreme software networking has turned the Web into a communication, social networking, collaboration and publishing platform, a medium, and an application programming platform. However, as mentioned earlier, the Web was originally designed for documents, as a way of sharing information. To be able to do what we wanted with the Web, we had to make it programmable.

Programming The Web

The Web today is programmable. It can be programmed for various purposes, some of which were mentioned at the beginning. This programmability is derived from mature, generic software programming principles: we need an architecture, a design and an implementation. Generic software programming has evolved through a lot of changes, some of which qualify as improvements. From programming styles like procedural programming, the focus has moved on to architecture and design, producing techniques like OOP and newer approaches such as Agile and Model Driven Architecture. Along similar lines, the Web today is full of upcoming architectures and approaches. Let's mull over some of them here.

Services

The concept of a service was created to emphasize loose coupling and a client-server relationship. Pre-Web software was usually tied to the hardware and the associated platforms. The Web, being so open and ubiquitous, cannot afford that; it was meant for sharing, irrespective of such restrictions. Hence the concept of a service: a function with a purpose that serves all clients without any restriction on their implementation details.

Service Oriented Architecture (SOA)

Such a collection of exposed services that clients could avail of was termed Service Oriented Architecture (SOA). These services communicated with each other; some collaborated and some were standalone.

To be able to do a handshake, clients had to obey the protocols specified by the service. The most popular ones were XML-RPC and SOAP, which focused on abstracting the Web for applications and domains. A different approach was taken with REST, which focused on using the Web as it is, by following its basic principles.

The advantage of SOA was that businesses could now choose between services without being hindered by technology or organizational boundaries. Neither the definition nor the specifications of SOA were limited to, or dependent on, the Web. SOA could allow interesting mashups and integrations. SaaS is completely based on it and has been able to bring an outsourcing-like concept to businesses.

However, there are some key disadvantages to this approach. The biggest is that, in an effort to be platform agnostic and portable, SOA is buried under a load of specifications. It is getting increasingly difficult and costly to comply with the protocols and talk to a service. Another disadvantage, though not always severe, is that the services are not discoverable. Knowledge of a service is required to use it, which mandates a directory of services. Since the Web is boundless by nature, it is impossible to keep such a directory. This makes SOA less reachable.

Web Oriented Architecture (WOA)

To make SOA lighter and more approachable came WOA. It is essentially a subset of SOA that recommends using REST over heavier counterparts like SOAP. REST's philosophy of differentiating between network programming and desktop programming makes it simpler to use for the former.

WOA is more customized for the Web by including REST, and by specializing it can strip off the heavy abstractions that all-inclusiveness demands.

Resource Oriented Architecture (ROA)

Here comes a radical approach – well, radical from the SOA perspective. Alex Bunardzic introduced ROA. While WOA is conceptually still SOA, ROA is a rebel for a good reason. Alex points out that the concept of services might not apply to the Web. As mentioned earlier, services cannot be discovered and it is not possible to maintain a catalog; this is where SOA goes against the Web. ROA believes that the Web is exploratory by nature.

Because of the uniqueness of the web as a medium, the only abstraction that does it true justice is resource. The web is a collection of resources. These resources are astronomically diverse, and it would be mathematically impossible to maintain any semblance of a reasonable inventory of web resources.

Each resource on the web, no matter how unique and complicated it may be, obeys only one protocol. This protocol has three major aspects to it. In no particular order, these aspects are:

  1. Each resource knows how to represent itself to the consumer
  2. Each resource knows how to make a transition from one state to another state
  3. Each resource knows how to self-destruct

ROA is more of a paradigm than an architectural approach; it considers resources to be the elements of the Web. The key part, however, is that they can be discovered, and once discovered they can represent themselves. No prior knowledge of a resource is required to start a conversation, as against knowing the capabilities of a service in SOA. ROA is completely based on REST and basks in its advantages – simplicity, minimal technical requirements and a URI for every resource. The use of the basic elements of the original WWW makes it easy for one resource to talk to another.
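The three aspects of the resource protocol above can be sketched as a toy, in-memory model. The class and method names below are purely illustrative – this is not a real framework, just the shape of the idea:

```python
# A toy model of ROA's uniform protocol: every resource is addressed by a
# URI and knows how to represent itself, transition state, and self-destruct.
class Resource:
    def __init__(self, uri, state):
        self.uri = uri
        self.state = dict(state)

    def represent(self):
        # Aspect 1: the resource represents itself to the consumer.
        return {"uri": self.uri, **self.state}

    def transition(self, **changes):
        # Aspect 2: the resource moves from one state to another.
        self.state.update(changes)


class ResourceSpace:
    def __init__(self):
        self._by_uri = {}

    def put(self, uri, **state):
        self._by_uri[uri] = Resource(uri, state)

    def get(self, uri):
        # No prior knowledge needed: a URI is enough to start a conversation.
        return self._by_uri[uri]

    def delete(self, uri):
        # Aspect 3: the resource self-destructs.
        del self._by_uri[uri]


space = ResourceSpace()
space.put("/articles/roa", title="ROA", published=False)
space.get("/articles/roa").transition(published=True)
print(space.get("/articles/roa").represent())
# {'uri': '/articles/roa', 'title': 'ROA', 'published': True}
```

Note that the consumer only ever needs the URI; there is no service catalog to consult first, which is exactly the contrast with SOA drawn above.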

The only disadvantage I see in ROA is that it is defined only for the Web. Although there can be analogous implementations in other areas, unlike SOA it is not conceptualized for non-Web platforms. There are new developments happening in this area, but it is still not as mature as SOA.

Epilogue

Analyzed closely, all of these focus on having a standardized interface. ROA is simpler than SOA and uses hyperlinks effectively to reach a wider base. But whether that is a requirement will be determined by the business need.

As a software developer, what is in store for me in all this? Well, these paradigms are about to define the direction in which Web programming will head in the future. The one that dominates will survive. However, to be dominant it will have to prove itself loyal both to the Web and to businesses. If they co-exist, it will be critical to identify the applicability of each. If not, there will have to be preparations to handle the disadvantages of the dominant one. Either way, these paradigms will affect the businesses in which they are used. And with the Web playing such an important role today, this impact will not be ignorable!



Copyright Abhijit Nadgouda.


Should The Web REST Its Case?

Today the Web is being treated as an application and messaging platform, as a publishing platform and as a medium. However, the initial intent, and hence the design, of the Web was to host documents and make them available to everyone. Here is an excerpt from the summary of the World Wide Web by Tim Berners-Lee:

The WWW world consists of documents, and links. Indexes are special documents which, rather than being read, may be searched. The result of such a search is another (“virtual”) document containing links to the documents found. A simple protocol (“HTTP”) is used to allow a browser program to request a keyword search by a remote information server.

The web contains documents in many formats. Those documents which are hypertext, (real or virtual) contain links to other documents, or places within documents. All documents, whether real, virtual or indexes, look similar to the reader and are contained within the same addressing scheme.

In a nutshell, the Web was intended for documents so that information could be shared. The design of the Web and underlying techniques like the HyperText Transfer Protocol (HTTP) and HTML target these hyperlinked documents and exclude the modern connotations.

Protocols

SOAP and XML-RPC

To be able to do more with the Web, a layer of abstraction was introduced, bringing new protocols, data structure formats and rules to abide by. XML-RPC was a product of this attempt and later evolved into SOAP to handle enterprise scenarios; let's club the two together for our purpose. The purpose of these protocols was to ensure communication between disparate machines with disparate platforms and disparate programming environments – and that they did, to the fullest extent. Utilities were offered as services, which clients could use by making requests with these protocols. SOAP has since grown more and more inclusive, stricter and more tedious. A lot of specifications were developed, which caused the effort and cost of using a service to climb.
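To get a sense of what this abstraction layer looks like on the wire, here is a sketch using Python's standard `xmlrpc.client` module to serialize a call. The method name `examples.add` and its arguments are made up for illustration; no real endpoint is involved:

```python
import xmlrpc.client

# Serialize a call to a hypothetical "examples.add" method into the
# XML-RPC wire format (an XML <methodCall> document).
payload = xmlrpc.client.dumps((5, 3), methodname="examples.add")
print(payload)

# The receiving end parses the same XML back into parameters and a
# method name, independent of either side's platform or language.
params, method = xmlrpc.client.loads(payload)
print(method, params)
```

The XML envelope is what buys the platform independence discussed above – and also what makes the heavier SOAP stack, which layers many more specifications on the same idea, costly to comply with.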

There were two problems with SOAP. One was that the Web was being used for all kinds of things, a lot of which were not enterprise or corporate, and SOAP started getting oversized and bulky for them. The second was that SOAP uses the POST method of HTTP. (HTTP provides two commonly used methods – GET and POST. GET lets you retrieve information and provides a Uniform Resource Identifier (URI) for it; this URI can be used as an identifier for that information or resource. With POST, a package has to be sent to the web server, and a simple URI does not suffice.) Using POST meant SOAP had to do away with the URI and its associated basic benefits of simplicity, convenience and caching.
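The GET-versus-POST distinction is easy to see with Python's standard `urllib.parse`. The host `example.com` and the `isbn` parameter below are hypothetical, chosen only to illustrate the point:

```python
from urllib.parse import urlencode

# With GET, the request data lives in the URI itself, so the address is a
# self-contained, shareable, cacheable identifier for the resource.
params = {"isbn": "0596529260"}
get_uri = "http://example.com/books?" + urlencode(params)
print(get_uri)  # http://example.com/books?isbn=0596529260

# With POST, the same data travels in the request body instead, and the
# bare URI no longer identifies what was asked for.
post_uri = "http://example.com/books"
post_body = urlencode(params)
```

This is why a SOAP request cannot simply be bookmarked or cached by intermediaries the way a GET URI can.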

REST

So came in a new perspective: REST – Representational State Transfer. REST, coined by Roy Fielding in Architectural Styles and the Design of Network-based Software Architectures, takes an approach contrary to SOAP. Instead of building over the basics of the Web, it tries to use them as they are. It uses GET to request information and identifies every single resource with a URI. This URI can then be used by anyone, anywhere – a simple string that can identify and locate a resource on the Web. No additional protocols beyond HTTP, and URIs that form the hyperlinks. Keep it simple and keep it accessible – very much in the spirit of the ideology behind the WWW summary.

With the emergence of Web 2.0, there was a search for an easier and more open paradigm for using the Web, which was found in REST. I am with REST for now. However, I am not sure whether it will be an oversimplification for some problems. Only time will tell!

The RSS Blog has a good illustration of all three protocols. I sometimes wonder if a combination of these protocols would provide a better solution in some cases. A lot of discussions end up in flaming and finger pointing; however, there are some good thoughts out there.

Non-participation by W3C

It would be better if the World Wide Web Consortium (W3C) participated in the creation of these paradigms and protocols. The W3C is the authority and can play the role of keeping the basic principles of the Web intact. The existence of multiple protocols is a bigger problem than mere development inconvenience: it can divide the Web into incompatible parts, which would ultimately be the failure of the WWW.




Web Design – Art Or Engineering?

This has been the single most troubling query for me since I arrived in the Web arena. Systems programming and application programming are tilted more towards the engineering aspect, applying engineering basics to designing the UI (User Interface). However, because the Internet is treated as a medium rather than a platform, art has more scope here. I have seen the Photoshop guys and the content guys at each other's necks to own the design. Who gets the credit?

Content or Graphics?

My engineering background biases me towards content. On the web, content is king. Give importance to content identification, information architecture and user profiling, and then design. Use a Content Management System. A web site should support the standards and be usable, accessible (at least to its intended audience) and, more importantly, secure. But it cannot be just this! In today's competition for the top berth, graphic design plays an important role. Users are not ready to go with anything drab and already done. It has to be fresh, with new ideas. And it has to be usable and accessible – wow, am I going in circles?

Tommy Olsson of Accessites.org analyzes the two approaches designers take – visual and structural – and attempts a possible solution. The primary difference is that structural design flows with the content, whereas visual design ends up filling spaces with content. The structural approach can end up looking boring and over-engineered, whereas, as Tommy mentions, the visual approach can put less focus on usability and accessibility. He goes on to speculate:

Why, then, is the visual approach so much more prevalent than the structural? One reason is that most people think visually, especially when it comes to web design. Many also find abstract thinking very difficult, and abstract thinking is required for the structural design approach. Furthermore, visual designers believe that starting with the content will impose limitations on the design possibilities. The main reason, of course, is most likely that many designers use WYSIWYG tools like Dreamweaver or FrontPage, which are design-centric to the extreme.

That is the key: both parties end up using tools which are design-centric to the extreme. Visual designers see content as an impediment, and structural designers view graphic design as a restriction. One thing is sure: today, both are important.

Tommy wonders whether a visual and a structural designer with equal skills in HTML, CSS, usability, accessibility and graphic design would produce visually identical designs. Practically, that will be difficult to find out; and even then, the design will change depending on whether you focus first on the graphics or first on the content. Ideally each should be done by the corresponding domain expert and the two blended together.

Both

Will it not be great if both of them sit together and sort out the issue? Instead of stubborn designs on both sides, can there be design ideas and a brainstorming session to materialize them? Each party can contribute to the other's design from its own perspective. This can become imperative, in fact, where graphics are part of the content. Consider putting up images: art will focus more on colors and textures, whereas engineering will consider the impact of the images on size and performance. Which carries more weight probably depends on the type of website and its target audience. I would tend to invest in the structural approach when designing for a newspaper, whereas the weight can shift to the visual approach when designing for an art gallery.

Ultimately, the resulting website is a blend of both, so they have to be treated together and approved together. There is no one-upmanship. Web design is both art and engineering, and what the user should see is a balance between the two.



Web, JavaScript And Security

JavaScript is now mainstream, thanks to the popularity and widespread acceptance of AJAX. In fact, AJAX is considered a core part of Web 2.0.

Acceptance of a technology by the industry has always been subject to scanning under the security microscope, which has caused delays in accepting new things. JavaScript seemed to be following the same road until AJAX came around. AJAX gives the wonderful capability of behind-the-scenes requests that keep a web page dynamic and make it more user-friendly and attractive to the user.

JavaScript has matured; its security model, however, has not. JavaScript opens doors to browser-based attacks. This may sound like the same old crib against scripting, but delve a little into side-channel attacks and the real danger surfaces:

“We have discovered a technique to scan a network, fingerprint all the Web-enabled devices found and send attacks or commands to those devices,” said Billy Hoffman, lead engineer at Web security specialist SPI Dynamics. “This technique can scan networks protected behind firewalls such as corporate networks.”

The popular mode of attack today is exploiting the various browser vulnerabilities. But JavaScript can now get inside your network. Once inside, it can attack any IP-enabled device, including servers, routers and printers. The danger is no longer limited to the user's machine; it expands to the entire network, including corporate ones. Along with Web 2.0, these attack strategies too will mature, and new websites can end up being a haven for hackers in another cat and mouse game.

The good thing about seamless integration with scripting turns evil when the user never knows whether his or her machine or network has been attacked – unless the user is knowledgeable enough to set security to the right level. Every computer user cannot be expected to know the JavaScript vulnerabilities or keep their antennas up for JavaScript problems. That would beat productivity, which is the ultimate purpose of using computers.

Security makes it difficult

Various new web frameworks have come up which allow easy AJAX integration and quick site building. However, once the different vulnerabilities are considered, it is not easy any more. Consider cross-site scripting, cross-zone scripting or the new dangers of JavaScript.

Security does not figure in many applications as one of the primary requirements. Either the client is not very interested, or even if it is considered, its cost might turn it into a good-to-have feature. Many times, a project starts with a reduced scope where security is not urgent and is ignored. However, the project evolves with time, and then it is more difficult and expensive to make it secure. Today, Web 2.0 is headed that way.

Solutions?

Disabling JavaScript is the instant reactive solution to this problem, but it is not practical. Today scripting is ubiquitous. The solution lies in preventing hacks, not avoiding scripting. Incorporating security into the JavaScript design involves changing its model, which entails changing almost every web application today and will take time. The solution has to be a two-way approach – a policy-based solution and an effort to improve scripting environments.

Clients, designers, developers, browsers – the whole industry should accept policy-based decisions to avoid hacking. It would be perfect if there were a way of differentiating between well-intentioned and malicious code; maybe there can be certifications for non-malicious code. Ted Dziuba presents a novel, if somewhat critical, approach by differentiating between a document and an application.

Indeed, JavaScript is useful when the main purpose of your work is an application. When you are presenting information, however, there should be no JavaScript between the user and that information. As I said earlier: we as developers have an obligation to the rest of the internet to classify our work as either document or application. So, the next time you think that having your entire web site as one page with AJAX controls, please, think of the crawlers.

Software creators should focus on security along with the rush for quick and easy. Make the web site secure and safe as well as dynamic, interactive and flashy.

The industry needs to hold back a bit, focus on the JavaScript vulnerabilities, prepare for them, and then get gung-ho about it.



Interview With Usability Guru

Usability guru Jakob Nielsen was interviewed (via Ajaxian) on usability and its relation to advertising.

One of the things that comes out of it is the applicability of AJAX.

It’s important to remember that most web sites are not used repeatedly. Usually, users will visit a given page only once. This means that the efficiency of any given operation takes a back seat to the discoverability and learnability of the feature. Therefore, interaction techniques like drag-and-drop should almost never be used on web sites. Instead, focus on showing a few important features and making them accessible through a single click on a simple link or button.

Some business sites that are used repeatedly include features for approximating software applications. Online banking comes to mind, and I can easily envision a design that enables the user to see the front or back of a check through an AJAX technique on the account statement page, instead of going to a new page.

Do we simplify, or in fact complicate, the interaction by adding advanced features? This is a question to consider whenever any feature is implemented. Ideally, any feature should be implemented so that it supports the maximum number of platforms and browsers without any third-party plugins.

He also mentions the classic problem of doing technology for technology’s sake. It is important to realise that the software being developed is for the customer and no one else.

Remember: just because you love technology and advanced features, it doesn’t mean that your customers do. They just want to get in and out without worrying about your web site. So take it easy on the features.

I agree entirely that added features or programming tricks turn into mere gimmicks if they are not targeted at being used. One of the best ways of identifying required features is to identify their usage – who, why, how and when. If there are concrete answers to these questions, the feature becomes an important one; the rest fall into the category of gimmicks. Every Web 2.0 company talks about AJAX today, but how many talk about using it to improve the website?

He also talks about usability and conversion rates. Usability is an important factor: it makes the user comfortable, and that drives traffic. Advertising can get a user to the website, but conversion into a frequent user will depend on the content and how usable the website is for consuming that content.



Get More Accessible

This post is not about the commonly discussed, basic accessibility issues; they are very well covered by the Web Accessibility Initiative (WAI). This is about adding the finishing touches that get a site closer to being accessible, by designing with that intention.

Skip Links

Skip links function as navigators within a web page. They are required so that a person can navigate through the structure of the page with minimal clicks. They are an accessibility issue for those who cannot scroll or move through the page because of mobility problems, and a usability issue for users with less efficient navigation tools, such as mobile users.

A classic demonstration is at the 456 Berea Street site. The topmost links fall into the category of skip links, which users can use to jump to a specific part of the page. Since these links become part of the design itself, there are various ways of including them, one of which is discussed in the Accessites article. It describes a way of hiding the skip links from ordinary users while making them available to screen readers or on demand. You can try it out on the site, as Mike Cherim says:

I use an off-screen method, typically taking an unordered list and sending it a few thousand pixels into the darkness off-left — using the display property none should be avoided to ensure access to screen reader users. Then, one-by-one, employing a:focus (or a:active for IE users) in the CSS, I bring the anchors, not the list items, into view. In the interest of a best practice, I recommend locating them, when viewable, in the upper-left or across the top, giving them a good background and enough positive z-index in the CSS to ensure they stand out. An example of this is available right here on this site. Press Tab a couple of times to see the available skip links in action.

As you can see, on accessites.org skip links are provided to jump even to different types of information, like accessibility information. However, hiding the links falls into the arena of usability, which might not approve of it. The article nicely highlights the importance of skip links and why developers should handle them today to compensate for the lack of standardisation in user agents (browsers).

Whichever way they are included, skip links provide the last mile of accessibility. The fun part is that they are not at all difficult to implement. All they need are named anchors, or bookmarks as they are called.

CSS for multiple media

As part of theme development, Cascading Style Sheets should be developed for multiple media – screen, print, aural and other recognized media types.

Alternate High Contrast Theme

Providing a high contrast alternative can make your site more accessible to visually challenged users. Again using 456 Berea Street as an example, the link in the top right corner – Switch to high contrast layout – does exactly that. For some reason this option is not employed on many sites, even though it is the most direct and fruitful way of making a site accessible.

Implementation In WordPress

Since WordPress is a popular blogging tool (and one of my favorites), let's use it to see how the points discussed can be implemented.

The skip links themselves are nothing but links to specific parts of the page which, as mentioned earlier, can be implemented using HTML anchors. They should typically be placed in a location that can be reached without any additional effort, such as the top-level navigation. Once the different parts of the page are identified, mark them up and change the theme to include the links; e.g., header.php can be modified to include the skip links.
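As a sketch of the markup the theme would need to emit, skip links are just an unordered list of in-page anchors. The helper below assembles that markup in Python purely for illustration; the anchor names, labels and the `skip-links` id are invented, and a real theme would match its own section ids:

```python
# Build the skip-link list markup described above. Anchor names and labels
# are hypothetical; a real theme would use the ids of its own page sections.
def skip_links(targets):
    items = "\n".join(
        '  <li><a href="#{0}">{1}</a></li>'.format(anchor, label)
        for anchor, label in targets
    )
    return '<ul id="skip-links">\n' + items + "\n</ul>"


print(skip_links([
    ("content", "Skip to content"),
    ("sidebar", "Skip to sidebar"),
]))
```

Each `href="#..."` target must correspond to a named anchor or element id already present in the page, which is why marking up the page sections comes first.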

WordPress supports CSS to the fullest extent, including CSS for media other than the default screen medium. It is up to the designer to provide it; WordPress does not cause any hindrance.

Switching to the alternate high contrast theme can be provided using the popular theme switcher plugin, which temporarily changes the theme using cookies. You can modify the wp_theme_switcher() function to provide a link to the alternate high contrast theme. Of course, a high contrast theme has to be developed first. This is something designers should probably practice: providing a companion high contrast theme along with every theme they develop.

WordPress accessibility has been studied a lot, and there are good resources available on it.


