*OA – The Different Web Programming Paradigms

Update: A Spanish translation of this post is available, thanks to Adrián Moreno Peña.

The Web has been the apex of networking, from both the software and the social perspective. This extreme networking has turned the Web into a platform for communication, social networking, collaboration, publishing and application programming. However, as mentioned earlier, the Web was originally designed for sharing information through documents. To do what we wanted to do with the Web, we had to make it programmable.

Programming The Web

The Web today is programmable. It can be programmed for various purposes, some of which we mentioned in the beginning. This programmability is derived from mature, generic software programming principles: we need an architecture, a design and an implementation. Generic software programming has evolved through many changes, some of which qualify as improvements. The focus has moved from programming styles like procedural programming to architecture and design, producing techniques like OOP and, more recently, Agile and Model Driven Architecture. Along similar lines, the Web today is full of upcoming architectures and approaches. Let's mull over some of them here.

Services

The concept of a service was created to emphasize loose coupling and a client-server relationship. Pre-Web software was usually tied to the hardware and the associated platforms. The Web, being so open and ubiquitous, cannot afford that; it was meant for sharing, irrespective of such restrictions. Hence the concept of a service: a function with a purpose that serves all clients without any restriction on their implementation details.

Service Oriented Architecture (SOA)

A collection of such services, exposed for clients to avail of, was termed a Service Oriented Architecture (SOA). These services communicated with each other; some collaborated and some were standalone.

To be able to do a handshake, clients had to obey the protocols specified by the service. The most popular ones were XML-RPC and SOAP. They focused on abstracting the Web for applications and domains. A different approach was taken with REST, which focused on using the Web as it is, by following its basic principles.

The advantage of SOA was that businesses could now choose between services without being hindered by technology or organizational boundaries. Neither the definition nor the specifications of SOA were limited to or dependent on the Web. SOA allowed interesting mashups and integrations. SaaS is completely based on this, and has been able to bring an outsourcing-like concept to businesses.

However, there are some key disadvantages to this approach. The biggest is that, in an effort to be platform agnostic and portable, SOA is buried under a load of specifications. It is getting increasingly difficult and costly to comply with the protocols and talk to a service. Another disadvantage, which need not always be severe, is that the services are not discoverable. Prior knowledge of a service is required to use it, which mandates a directory of services. Since the Web is boundless by nature, it is impossible to keep such a directory. This makes SOA less reachable.

Web Oriented Architecture (WOA)

To make SOA lighter and more popular came WOA. It is essentially a subset of SOA which recommends using REST over heavier counterparts like SOAP. REST's philosophy of differentiating between network programming and desktop programming makes it simpler to use for the former.

WOA is more customized for the Web by including REST. And by specializing, it can strip off the heavy abstractions that all-inclusiveness demands.

Resource Oriented Architecture (ROA)

Here comes a radical approach (well, radical from the SOA perspective). Alex Bunardzic introduced ROA. While WOA is conceptually still SOA, ROA is a rebel for a good reason. Alex points out that the concept of services might not apply to the Web. As mentioned earlier, services cannot be discovered and it is not possible to maintain a catalog of them. This is where services go against the Web; ROA believes that the Web is exploratory by nature.

Because of the uniqueness of the web as a medium, the only abstraction that does it true justice is resource. The web is a collection of resources. These resources are astronomically diverse, and it would be mathematically impossible to maintain any semblance of a reasonable inventory of web resources.

Each resource on the web, no matter how unique and complicated it may be, obeys only one protocol. This protocol has three major aspects to it. In no particular order, these aspects are:

  1. Each resource knows how to represent itself to the consumer
  2. Each resource knows how to make a transition from one state to another state
  3. Each resource knows how to self-destruct

ROA is more of a paradigm than an architectural approach; it considers resources to be the elements of the Web. The key part, however, is that resources can be discovered, and once discovered they can represent themselves. No prior knowledge of a resource is required to start a conversation, as against knowing the capabilities of a service in SOA. ROA is completely based on REST and basks in its advantages – simplicity, minimal technical requirements and a URI for every resource. The use of the basic elements of the original WWW makes it easy for one resource to talk to another.
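The uniform three-part protocol described above can be sketched in a few lines. This is only an illustrative sketch: the `Resource` class, the order URI and the state names are all made up for the example, not taken from any ROA framework.

```python
# Minimal sketch of the ROA idea: every resource answers the same small
# uniform interface, so a client needs no prior service contract.
# All names and the example URI below are hypothetical.
class Resource:
    def __init__(self, uri, state):
        self.uri = uri          # every resource is identified by a URI
        self.state = state

    def represent(self):
        # 1. the resource knows how to represent itself to the consumer
        return f"<{self.uri}> {self.state}"

    def transition(self, new_state):
        # 2. the resource knows how to move from one state to another
        self.state = new_state

    def destroy(self):
        # 3. the resource knows how to self-destruct
        self.state = None

r = Resource("http://example.com/orders/42", "open")
r.transition("shipped")
print(r.represent())  # → <http://example.com/orders/42> shipped
```

A client that knows only the URI and these three verbs can converse with any resource, which is exactly the discoverability SOA lacks.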

The only disadvantage I see in ROA is that it is defined only for the Web. Although there can be analogous implementations in other areas, unlike SOA it is not conceptualized for non-Web platforms. There are new developments happening in this area, but it is still not as mature as SOA.

Epilogue

If analyzed, all of these focus on having a standardized interface. ROA is simpler than SOA and uses hyperlinks effectively to reach a wider base. But whether that is a requirement will be determined by the business need.

As a software developer, what is in store for me in all this? Well, these paradigms are about to define the direction in which Web programming will head. The one that dominates will survive. However, to be dominant it will have to prove itself loyal both to the Web and to businesses. If the paradigms co-exist, it will be critical to identify the applicability of each. If not, there will have to be preparations to handle the disadvantages of whichever prevails. Either way, these choices will affect the businesses in which they are used. And with the Web playing such an important role today, this impact cannot be ignored!



Copyright Abhijit Nadgouda.


Should The Web REST Its Case?

Today the Web is being treated as an application and messaging platform, as a publishing platform and as a medium. However, the initial intent and hence the design of Web was to host documents and make them available to everyone. Here is an excerpt from the summary of World Wide Web by Tim Berners-Lee:

The WWW world consists of documents, and links. Indexes are special documents which, rather than being read, may be searched. The result of such a search is another (“virtual”) document containing links to the documents found. A simple protocol (“HTTP”) is used to allow a browser program to request a keyword search by a remote information server.

The web contains documents in many formats. Those documents which are hypertext, (real or virtual) contain links to other documents, or places within documents. All documents, whether real, virtual or indexes, look similar to the reader and are contained within the same addressing scheme.

In a nutshell, the Web was intended for documents so that information could be shared. The design of the Web and underlying techniques like the HyperText Transfer Protocol (HTTP) and HTML target these hyperlinked documents and exclude the modern connotations.

Protocols

SOAP and XML-RPC

To be able to do more with the Web, a layer of abstraction was introduced, bringing new protocols, data structure formats and rules to abide by. XML-RPC is a product of this attempt, and it later evolved into SOAP to handle enterprise scenarios; let's group the two together for our purpose. The purpose of these protocols was to enable communication between disparate machines, with disparate platforms and disparate programming environments. That they did to the fullest extent. Utilities were offered as services, which clients could use by making requests with the protocols. SOAP has since evolved to become more and more inclusive, stricter and more tedious. A lot of specifications were developed, causing the effort and cost of using a service to climb.

There are two problems with SOAP. The first is that the Web is being used for all kinds of things, many of which are not enterprise or corporate; for them, SOAP is oversized and bulky. The second is that SOAP uses the POST method of HTTP. (HTTP provides two commonly used methods, GET and POST. GET lets you retrieve information and provides a Uniform Resource Identifier (URI) for it; this URI can then be used as an identifier for that information or resource. With POST, a package has to be sent to the web server; a simple URI does not suffice.) Using POST means SOAP does away with the URI and its associated basic benefits of simplicity, convenience and caching.
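The GET/POST difference is easy to see on the wire. Below is a hedged sketch contrasting the two request shapes; the host, paths and SOAP operation (`getBook`) are hypothetical, invented only to show where the meaning of each request lives.

```python
# A REST-style GET: everything is in the URI, which doubles as an
# identifier for the resource and can be bookmarked, linked and cached.
rest_request = "GET /books/1984 HTTP/1.1\r\nHost: example.com\r\n\r\n"

# A SOAP-style POST: the operation is buried inside an XML envelope in
# the body, so no URI identifies the information being requested.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><getBook><id>1984</id></getBook></soap:Body>
</soap:Envelope>"""

soap_request = (
    "POST /BookService HTTP/1.1\r\n"
    "Host: example.com\r\n"
    f"Content-Length: {len(soap_body)}\r\n\r\n" + soap_body
)

print(rest_request.splitlines()[0])  # → GET /books/1984 HTTP/1.1
print(soap_request.splitlines()[0])  # → POST /BookService HTTP/1.1
```

The first request line alone tells you which resource a GET is about; for the POST you must parse the envelope, which is exactly why the URI's benefits are lost.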

REST

So came in a new perspective: REST, or Representational State Transfer. REST, coined by Roy Fielding in Architectural Styles and the Design of Network-based Software Architectures, takes an approach contrary to SOAP. Instead of building over the basics of the Web, it tries to optimise the Web as it is. It uses GET to request information, and identifies every single resource with a URI. This URI can then be used by anyone, anywhere: a simple string that can identify and locate a resource on the Web. No additional protocols beyond HTTP, and the URIs form the hyperlinks. Keep it simple and keep it accessible, which very much goes with the ideology behind the WWW summary.

With the emergence of Web 2.0, there was a search for an easier and more open paradigm for using the Web, and it was found in REST. I am with REST for now. However, I am not sure if it will be an oversimplification for some problems. Only time will tell!

The RSS Blog has a good illustration of all three protocols. I sometimes wonder if a combination of these protocols would provide a better solution in some cases. A lot of discussions end up in flaming and finger pointing, but there are some good thoughts on the subject.

Non-participation by W3C

It would be better if the World Wide Web Consortium (W3C) participated in the creation of these paradigms and protocols. The W3C is the authority, and can play the role of keeping the basic principles of the Web intact. The existence of multiple protocols is a bigger problem than mere development inconvenience: it can divide the Web into incompatible parts, which would ultimately be the failure of the WWW.




More Interesting Uses Of WordPress

Lorelle points to a detailed article on using WordPress for online magazines. Max says:

WordPress, however, is an extensible website content management system that can be used to run magazine-type websites. Here are steps I took to turn this online magazine on Cebu from a blog into its current presentation.

So very right: WordPress is being used as a CMS. You can see sites like The Blog Herald that are inherently news sites. Support for multiple authors, roles, scheduled publishing, custom fields, comments, static pages, multilingual content and syndication are some of the capabilities that make WordPress suitable for this. The feature that takes the lion's share, though, is categories – multiple hierarchical categories. These can be used to implement so many features that they have become a must-have. The terrific template system along with this can work magic.

Along with the points mentioned by Max, WordPress automatically handles volume and issue management. Being a blogging tool, it archives posts chronologically, which is the usual way of handling issues and volumes. Even otherwise, WordPress provides mechanisms for retrieving posts for any period, literally any.

As a personal publishing tool, however, WordPress is excellent only up to a point; for anything more than that, it can be found lacking as a CMS. The basic content type it handles is the post, which can be packaged as an article or a news item. For radically different items, say events, a post would not be an apt structure. If you create a new content type, it will not be completely supported by WordPress; e.g., search will not work for it. Additionally, certain publication features are missing:

  • customizable and extensible workflow: the default workflow is a limited two-step affair. In practice, magazines include multiple roles in the workflow: journalists, editors, publishers. Sometimes different types of articles are edited by different editors.
  • version control and rollback: maintaining versions of drafts, and the ability to roll back to an older version.
  • tracing: keeping an auditable record of actions taken.
  • search: the search is only partially effective. It neither supports relevance ranking nor searches across pages, excerpts or custom fields. There are some solutions, but still not what the industry expects.

These gaps aside, WordPress is an ideal publishing tool. The best thing about it is that it lets you control not just the content, not just the style, but both. Using this combination you can develop a theme that is accessible, usable and standards-compliant. And it has been used for websites of various types.



Consistently Different

Luke Wroblewski at Functioning Form discusses the most common and prevalent dilemma in software design: the tension between consistency and content. Luke gives apt examples to show that the same user interface can underperform when a difference has to be illustrated to the user. However, consistency is tightly tied to usability; it is what makes an application easier to use. That is why a system should be consistently different. Let's define consistency before elaborating on this.

What Is Consistency?

Consistency is the ability of an application to hold its interface and responses steady across time. Consistency comes from transforming discovery into expectations; when the application complies with those expectations, the user's effort to use it reduces. A popular example: if you find the search box in the top right corner of a web page, you will start expecting the search box in the top right corner of every page across the website. Do you expect that F1 will bring up help in a desktop application on Windows? At another scale, for branding, consistency has to be maintained across the websites of a single company: the same logo in the same place, the same resolution, the same primary colors, and so on.

Consistently Different

However, what is missing from this definition is that consistency can apply to an atomic element or to a composite element. Consistency across the different atomic elements that form a composite element results in consistency for the composite. Let's narrow the scope for discussion: a left click on a URL link takes us to a different page, while a left click on a mailto link opens our default email program. This difference in response is accepted because it holds across the entire website; in fact, we now expect it on all websites. If the response to these left clicks is consistent, the website can be consistent.

In other words, the definition of consistency for the website includes treating different elements differently, which is what I call being consistently different. However, it is very easy to go overboard with this and cross the delicate line between being consistently different and being inconsistent. To design for consistency, it is of prime importance to identify the different content types. As part of requirements analysis, the common behavior can be identified and the specific behavior differentiated. Whether two content types should be called different should depend entirely on whether the user needs to see them differently. This results in consistency across all content types and specificity for each of them. For example, the HR reports and financial reports that a CEO reads can be different, but both can export a PDF version. The PDF version is consistent across all reports, yet the financial report can allow complex financial calculations that the HR report will not.
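The report example maps naturally onto shared and specific behavior. The sketch below is hypothetical (the class names, the PDF-export stub and the interest calculation are invented for illustration), but it shows the shape of the idea: common behavior lives in one place, differences in each content type.

```python
# "Consistently different": every report type shares the same PDF
# export (the consistent part), while each type adds its own
# behaviour (the different part). All names here are made up.
class Report:
    def export_pdf(self):
        # consistent across every report type
        return f"PDF of {type(self).__name__}"

class HRReport(Report):
    pass  # nothing extra: it only needs the shared behaviour

class FinancialReport(Report):
    def compound_interest(self, principal, rate, periods):
        # specific to financial reports only
        return principal * (1 + rate) ** periods

print(HRReport().export_pdf())         # → PDF of HRReport
print(FinancialReport().export_pdf())  # → PDF of FinancialReport
```

The user sees one consistent PDF button everywhere, yet each report type keeps the specific capabilities its content demands.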

The design should also include standards. Standards are a way of being consistent across an entire domain.

Consistency for a software application is a result of consistency across all the elements, and of consistently treating the different content types differently.



Web Design – Art Or Engineering?

This has been the single most troubling question for me since I arrived in the Web arena. Systems programming and application programming tilt towards the engineering aspect, applying engineering basics to designing the UI (User Interface). However, because the Internet is treated as a medium rather than a platform, art has more scope here. I have seen the Photoshop folks and the content folks at each other's necks over ownership of the design. Who gets the credit?

Content or Graphics?

My engineering background biases me towards that side. On the Web, content is king. Give importance to content identification, information architecture and user profiling, and then design. Use a Content Management System. A website should support the standards, and should be usable, accessible (at least to its intended audience) and, more importantly, secure. But it cannot be just this! In today's competition for the top berth, graphic design plays an important role. Users are not ready to go with anything drab and already done. It has to be fresh, with new ideas. And it has to be usable, accessible. Wow, am I going in circles?

Tommy Olsson of Accessites.org analyzes the two approaches designers take, visual and structural, and attempts a possible solution. The primary difference is that structural design flows with the content, whereas visual design ends up filling spaces with content. The structural approach can end up looking boring and too engineered; on the other hand, as Tommy mentions, the visual approach can put less focus on usability and accessibility. He goes on to speculate:

Why, then, is the visual approach so much more prevalent than the structural? One reason is that most people think visually, especially when it comes to web design. Many also find abstract thinking very difficult, and abstract thinking is required for the structural design approach. Furthermore, visual designers believe that starting with the content will impose limitations on the design possibilities. The main reason, of course, is most likely that many designers use WYSIWYG tools like Dreamweaver or FrontPage, which are design-centric to the extreme.

That is the key: both parties end up using tools which are design-centric to the extreme. The visual designers see content as an impediment, and the structural ones view graphic design as a restriction. One thing is sure: today both are important.

Tommy wonders whether a visual and a structural designer with equal skills in HTML, CSS, usability, accessibility and graphic design would produce visually identical designs. Practically, it will be difficult to find out; and even then, the design will change depending on whether you focus first on the graphics or first on the content. Ideally each aspect should be done by the corresponding domain expert, and then the two blended together.

Both

Would it not be great if both of them sat together and sorted out the issue? Instead of stubborn designs on both sides, can there be design ideas and a brainstorming session to materialize them? Both parties can contribute to each other's designs from their own perspective. It can become imperative, in fact, in cases where graphics are part of the content. Consider putting up images: the art side will focus on colors and textures, whereas the engineering side will consider the impact of the images on size and performance. Which matters more probably depends on the type of website and the target audience. I would tend to invest in the structural approach when designing for a newspaper, but the weight can shift to the visual approach when designing for an art gallery.

Ultimately, the resulting website is a blend of both, so they have to be treated together and approved together. There is no one-upmanship. Web design is both art and engineering, and what the user sees should be a balance between the two.



Design Efficiency

Aza Raskin at Humanized has written an interesting article about design efficiency. He provides a quantitative way of measuring the efficiency of a design.

Of course, efficiency cannot be absolute, but it is an indication enough.

The most important property of efficiency is that it lets you know how you are doing in the grand scope of things. If an engine has an efficiency of 10%, you know you can do a lot better. And if an engine has an efficiency of 98%, no matter how brilliantly you design, you won’t be able to improve it much.

Efficiency lets you know when you can stop looking for a better design.

Efficiency can tell you when you need a new inspiration.

A benefit of measuring efficiency is the focus on the big picture: considering all the different aspects, and consciously ignoring the ones that don't matter. It is extremely important to consider only the factors that fall within the scope; otherwise the measurement leads in the wrong direction or wastes effort. It is also important to measure efficiency from the user's perspective.

Aza includes one of the quirks I love to hate – the desktop:

The Desktop.

Think about it: if you want to write a letter, how much of the letter do you get written on the Desktop? None. If you want to look something up in Wikipedia, how much of the searching do you get done on the Desktop? None. If you want to perform a calculation, how many numbers get crunched on the Desktop? None!

The time you spend fiddling with your Desktop to get where you need to go to get your work done is entirely wasted. You get no work done on the Desktop. It has an efficiency of 0%.

Clearly, there is a lot of room for improvement on the Desktop. A lot. We think we have a solution so keep your eyes peeled next month.

In my opinion, the desktop should play the role of an interface to the computer. Whether we deal with applications or activities is something the desktop should convey, and it should be customizable according to the user's expertise.

Efficiency is important: it determines how much to design, and sometimes enables innovations that break the conventional ways and lead the pack.



Activity Centred Design

Don Norman proposes activity-centred design as a better approach to designing. Logical methods of organizing into taxonomies and classifications do not support activities, which are what users carry out while using software.

Taxonomic structures are appropriate when there is no context, when suddenly needing some new piece of information or tool. That’s why this structure works well for libraries, stores, websites, and the program menu of an operating system. But once an activity has begun, then taskonomy is the way to go, where things used together are placed near one another, where any one item might be located logically within the taxonomic structure but also wherever behaviorally appropriate for the activities being supported.

Provide The Context

However, it is unfair to say that logical reasoning is the culprit; it is an oversight on the part of designers. It is important to realise that users carry out actions on a piece of content after it is discovered. Taxonomy is very effective in providing navigation and discoverability of content. Once content is found, however, the software should provide context to the user. Supporting activities is part of providing that context. The context is a direct result of the user's purpose in using the software, visiting the website, using the car dashboard or visiting the city library.

Once the purpose is established, the context(s) can be identified. Although context is entirely user centric, more factors contribute to it. For software, the layout design, the usability design, the usage patterns and, more importantly, the usage history are all part of providing context. Some of this can be gathered while the requirements are being identified, and some has to be built while the software is being used. Usage patterns can be developed for certain users, and corresponding contexts provided.

The Desktop

Would it help if the desktop were activity centric rather than application centric? How many end users want to know about, or be worried by, the installation and uninstallation of applications? The common man uses a computer to carry out activities. Each type of user has different activities to carry out: the home user, the employee, the student, the teacher, the developer, the marketing personnel all use the computer differently. Why is the desktop the same for all of them? Why is the desktop not engineered according to their activities?

A solution is to change the interface, the desktop manager. It should carry activities like browsing the internet, checking email or writing a letter, rather than shortcuts to applications. This invites the user to carry out his or her tasks rather than start and close applications. The activities and underlying tasks can vary per user.

The challenge here is to take something as generic as a desktop manager and specialise it for a user, while staying generic at the core. This requires a deep understanding of the user types and their possible activities. It is also important not to limit the user to certain functionalities, but to be flexible so that the desktop can be customized per user.
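The core of such an activity-centred launcher is a mapping from user activities to the applications that carry them out, kept per user so it can be customized. This is a deliberately tiny, hypothetical sketch; the activity names and application labels are invented for illustration.

```python
# Hypothetical sketch of an activity-centred desktop manager: the user
# picks an activity, and the launcher resolves it to an application.
# The mapping can differ per user profile, keeping the core generic.
ACTIVITIES = {
    "write a letter":      "word processor",
    "check email":         "mail client",
    "browse the internet": "web browser",
}

def launch(activity, activities=ACTIVITIES):
    # resolve the activity; unknown activities are reported, not crashed on
    app = activities.get(activity)
    return f"starting {app}" if app else "unknown activity"

print(launch("check email"))  # → starting mail client
```

A student's profile could pass a different `activities` dictionary to `launch`, which is the per-user customization the paragraph above calls for.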



Simplicity and Side Effects

Alex Bunardzic has written a nice piece on achieving simplicity when developing software. He also points to a post at 37signals along similar lines. As at other times, I commented on Alex's post, but later I thought this topic warranted a post here.

What About Flexibility?

However, I feel it is incomplete without its possible side effects, the biggest being reduced flexibility. Of course, flexibility itself can be the evil sometimes, but the design of any software is (or should be) done with enough flexibility. One of the classic divisions in the Linux world is over Gnome's simplicity versus KDE's flexibility. Another instance is the popular messenger gaim opting to reduce the number of preferences to make it simple.

Reducing necessary flexibility can kill an application and sometimes its usability. There are logical sets of customization units that need to be equally flexible; e.g., to let the user select a font, a word processor has to provide changes of typeface, font size and decoration together. However, this flexibility might not be necessary for writing posts on a blog, as the presentation is controlled by the theme.

Solution?

What is required is to keep a balance between enough flexibility and possible simplicity (through usability); beyond this balance, the two camps are ready to wage war on each other. One of the ways I like to achieve this balance, in the case of preferences, is to provide default values. Allow the user to make fewer choices by providing default values for the things he or she does not care about. Using the combination of flexibility and default values, you can provide different levels of choice to different users.
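The defaults-plus-overrides idea fits in a few lines. This is a sketch under assumptions: the preference keys (typeface, size, decoration, echoing the word-processor example above) and their default values are made up for illustration.

```python
# Sketch of "flexibility through defaults": the application stays fully
# configurable, but the user only touches the preferences they care
# about. The keys and values below are hypothetical.
DEFAULTS = {"typeface": "serif", "size": 12, "decoration": "none"}

def effective_prefs(user_prefs):
    merged = dict(DEFAULTS)     # start from sensible defaults
    merged.update(user_prefs)   # the user overrides only what they want
    return merged

# A user who cares only about font size makes exactly one choice:
print(effective_prefs({"size": 14}))
# → {'typeface': 'serif', 'size': 14, 'decoration': 'none'}
```

Power users can override every key; everyone else gets simplicity for free, which is the balance the paragraph above describes.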

Another way can be to provide themes of preferences that users can choose from, something similar to desktop themes or blog templates. User profiles should be used to identify the different combinations of choices to package as a theme.

Documentation A Must

I think this is the biggest cost of providing flexibility. Without supporting documentation, the application is just another bundle of controls and dialogs for the user. Usable and extensive help can provide the necessary education about the application.

I am a believer in the Less Is More paradigm, but it is imperative to decide how much less. This will by and large depend on the user profiles the application is expected to serve. Over the long term, software with the right formula of flexibility, simplicity and functionality will be successful. However, the formula will not be the same for every project; it will vary and depend on many factors. The result should be software as simple as possible, with enough flexibility to provide the expected functionality.



Get More Accessible

This post is not about the commonly discussed, basic accessibility issues; those are covered very well by the Web Accessibility Initiative (WAI). This is about adding the last straws: getting closer to being accessible by designing with that intention.

Skip Links

Skip links function as navigators within a web page. They are needed so that a person can move through the structure of the page with minimal clicks. They are an accessibility issue for those who cannot scroll or move through the page because of mobility problems, and a usability issue for users with less-than-efficient navigation tools, like mobile users.

A classic demonstration is at the 456 Berea Street site. The topmost links fall into the category of skip links, which users can use to jump to a specific part of the page. Since these links become part of the design itself, there are various ways of including them, one of which is discussed in an Accessites article. It describes a way of hiding the skip links from sighted users while keeping them available to screen readers or on demand. You can try it out on the site, as Mike Cherim says:

I use an off-screen method, typically taking an unordered list and sending it a few thousand pixels into the darkness off-left — using the display property none should be avoided to ensure access to screen reader users. Then, one-by-one, employing a:focus (or a:active for IE users) in the CSS, I bring the anchors, not the list items, into view. In the interest of a best practice, I recommend locating them, when viewable, in the upper-left or across the top, giving them a good background and enough positive z-index in the CSS to ensure they stand out. An example of this is available right here on this site. Press Tab a couple of times to see the available skip links in action.

As you can see, on accessites.org skip links are provided to jump even to different types of information, like accessibility information. However, hiding the links falls into the arena of usability, which might not approve of it. The article very nicely highlights the importance of skip links, and why developers should handle them today to compensate for the lack of standardisation in user agents (browsers).

Whichever way they are included, skip links provide the last mile of accessibility. The fun part is that they are not at all difficult to implement. All they need are named anchors – or bookmarks, as they are also called.
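To make this concrete, here is a minimal sketch of the off-screen technique described above. The class name, target ids and pixel values are illustrative, not taken from either site. The skip links go first inside the body:

```html
<!-- Placed first inside <body>; the target ids are illustrative -->
<ul class="skip-links">
  <li><a href="#content">Skip to content</a></li>
  <li><a href="#navigation">Skip to navigation</a></li>
</ul>
```

The stylesheet then sends each anchor off-left and brings it back when it receives keyboard focus, instead of using display: none, which would hide it from screen readers too:

```css
/* Off-screen, but still reachable by screen readers and the Tab key */
.skip-links a {
  position: absolute;
  left: -3000px;
}

/* Bring the focused anchor into view (a:active for older IE) */
.skip-links a:focus,
.skip-links a:active {
  left: 0;
  top: 0;
  background: #ffc;
  z-index: 100;
}
```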

CSS for multiple media

As part of theme development, Cascading Style Sheets should be developed for multiple media – screen, print, aural and the other recognised media types.
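One way to do this – the file names here are illustrative – is to serve a separate stylesheet per media type through the media attribute of the link element:

```html
<link rel="stylesheet" type="text/css" media="screen" href="screen.css" />
<link rel="stylesheet" type="text/css" media="print" href="print.css" />
<link rel="stylesheet" type="text/css" media="aural" href="aural.css" />
```

The same effect can be achieved within a single stylesheet using @media blocks.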

Alternate High Contrast Theme

Providing a high contrast alternative can make your site more accessible to visually challenged users. Again, using 456 Berea Street as an example, the link in the top right corner – Switch to high contrast layout – does exactly that. For some reason this option is missing on many sites, even though it is the most direct and fruitful way of making a site accessible.

Implementation In WordPress

Since WordPress is a popular blogging tool (and one of my favorites), let's use it to see how the points discussed can be implemented.

The skip links themselves are nothing but links to specific parts of the page, which, as mentioned earlier, can be implemented with plain HTML anchors. They should be placed in a location that can be reached without any additional effort, such as the top-level navigation. Once the different parts of the page are identified, mark them up and change the theme to include the links; header.php, for example, can be modified to include the skip links.
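As a sketch – the ids and class name here are illustrative, and the matching ids must of course exist in the theme's other template files – the markup added to header.php could look like:

```php
<?php /* In header.php, immediately after the <body> tag */ ?>
<ul class="skip-links">
    <li><a href="#content">Skip to content</a></li>
    <li><a href="#sidebar">Skip to sidebar</a></li>
</ul>
```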

WordPress supports CSS to the fullest extent, including CSS for media other than the default screen. Providing such stylesheets is entirely the designer's prerogative; WordPress itself poses no hindrance.

Switching to an alternate high contrast theme can be offered using the popular theme switcher plugin, which changes the theme temporarily using cookies. You can modify the wp_theme_switcher() function to provide a link to the alternate high contrast theme. Of course, a high contrast theme has to be developed first. This is something designers should probably make a practice: provide a companion high contrast theme along with every theme they develop.
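As a sketch, and assuming the theme switcher plugin's wptheme query variable and a hypothetical companion theme named High Contrast, such a link could also be hand-rolled into the theme directly:

```php
<?php /* Hypothetical: a link asking the theme switcher plugin
         to activate a companion theme named "High Contrast" */ ?>
<a href="<?php bloginfo('url'); ?>/?wptheme=High+Contrast">
    Switch to high contrast layout
</a>
```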

WordPress accessibility has been studied a lot, and there are some good resources on the subject.


Copyright Abhijit Nadgouda.

Dynamic And Rich But Not Without Text

In the beginning of the Web, everyone coded their HTML files by hand. There was no other way. Individual HTML files were written and manually linked to other pages.

Then came dynamic page creation. The whole Web 2.0 movement, excellent blogging tools like WordPress, Content Management Systems and portals have proven that today a website should be dynamic and rich. It is in line with the fast-moving world, or rather the fast-moving virtual world, that we live in.

Here, dynamic means more than one thing:

  • Database-driven applications improve performance, security and storage of data. Scripting languages (PHP, JSP, ASP.Net, Perl, …) can then be used to retrieve the data dynamically and display it.
  • Template based page creation help in creating consistent (X)HTML pages. Templates can be used effectively to ensure valid markup creation. This has attracted attention with increased stress on valid (X)HTML.
  • Technologies like AJAX can be used effectively to enhance the user experience. They provide better interaction and encourage user participation.
  • Separation of data and formatting is a necessity nowadays, with more than one delivery channel for content. Users may read your content on their PCs or handheld devices, or take a printout. Separating data from formatting helps in providing the same content in multiple formats. It also makes it possible to keep the website design up to date with current trends by updating only the formatting, without affecting the data.
  • Multimedia is now actively used over the Web. With websites being used for songs and the increasing trend of podcasting, websites are not just about text any more. Technologies like Flash are used to provide a “rich” experience to the user.

Justified as this approach is, these applications should still provide alternative textual content for every non-text element. The output of each of them should also include the static HTML of the old days. Why? Two reasons – accessibility and search engine optimisation.

Web Was Made For Reading

It might seem a little outdated, but the Web was made for reading. Browsers inherently support text, but require plugins for other technologies like Flash and multimedia playback. Even if you publish multimedia or images on your website, you should still provide alternative textual content, which comes into play if the user does not have the plugin installed, if the images cannot be displayed, or if the user chooses to block them.
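The object element makes this fallback straightforward: anything inside it is rendered only when the browser cannot handle the plugin content. A sketch, with illustrative file names:

```html
<!-- The inner markup is shown only if the Flash movie cannot be -->
<object type="application/x-shockwave-flash" data="intro.swf"
        width="400" height="300">
  <p>An animated introduction to the site.
     <a href="intro.html">Read the text version instead.</a></p>
</object>
```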

Adhering to accessibility leads to a high level of search engine optimisation, which is exactly what Andy Hagan says. The reason is that search engines can read only what is human readable. Content that humans cannot read or access cannot be read, and hence indexed, by search engines, which ultimately is the website owner’s loss.

Techniques For Alternative Textual Content

This is a core part of accessibility: the information remains accessible to the user through the alt attribute. An alt attribute should be provided for every non-text element; even the area elements of client-side image maps carry one. For the img element, the alt attribute is mandatory under the XHTML doctypes.

Here is some good guidance on providing text equivalents for client-side image maps. There are similar techniques for applets and objects. The longdesc attribute can be used to point to an alternate page containing the textual equivalent of the corresponding non-text element.
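Putting the two attributes together, a sketch with illustrative file names:

```html
<!-- alt carries a short equivalent; longdesc points to a full one -->
<img src="sales-chart.png"
     alt="Bar chart of monthly sales for 2006"
     longdesc="sales-chart-description.html" />
```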

There are also some innovative modern approaches to making alternative content available. A Modern Approach to Flash SEO has excellent advice on providing alternative text for SEO.

This is necessary because, I think, there has not been a uniform upgrade in technologies. While we consider multimedia a core part of Web 2.0, search engines still read only text and browsers still need plugins for a lot of technologies. Looking forward to more innovative ways.


