web3 - is anybody out there?

______________________
situated social web
...

web3 medium - myAvatar myLife

web3... continued
______________________
what next?
Just how is my Web3 vision going to take root in a space where, as I said before, anything more complex than a light switch is out and so many “game changing” venture claims fail? The answer is simple – we are already halfway there and I'm just gauging an existing trend.
Clear signs exist that a more personalised and bespoke future of the Social Web is near. An aesthetically richer medium capturing style, creativity and the compulsion to express more than in-your-face comments and slapstick pics. Apparently, the fastest growing Social Web network at the moment is SoundCloud (http://soundcloud.com/). It is like the original MySpace on steroids - encouraging users to record or create music and sound, share it with the world and build communities in the process.
______________________
have a look at this; but I can't be bothered sorry
Since I believe that an underlying desire for richer profiles exists, will SoundCloud be a killer app? I don't think so – the narrow-band interest communities are old hat now, and nerds calling out ad nauseam “have a look at this” will never make a social scene. That's been tried to death and represents nothing more than a Flickr or YouTube for sound – not a community but simply a space to dump stuff. Most likely it will be a nice gadget popping up on the wall of every budding DJ in your crowd. Another reason for SoundCloud to fail is its lack of technical prowess. For one, I've never experienced a session without a number of errors popping up, but more importantly the user experience is just so poor, e.g. the sound cuts out as you browse – surely it could play on until you select another track.
Frankly, Facebook's “listen and watch what your friends are doing” is likely to have more grip once the initial flood of outcry from users reluctant to accept new features and changes dies down. The recently introduced timeline, on the other hand, could prove to be a dud, even if it enhances the means for sharing one's life with “friends”. It smacks of the tedious “look at my snaps from the holiday” routine, where we tend to yawn and look for a way out. The point I'm trying to make is that push is out and pull is in; a voyeuristic peek at your friends is OK, but slide nights faded into obscurity very long ago. There is a fine line somewhere between “jackass” videos or one's intimacy exposed and the vain look-at-me calls.
______________________
my id (the antonym of ego)
The appeal of the secret or hidden side in all of us is constantly growing in an otherwise conformist world. The need to reveal it grows in parallel; even the public eye increasingly becomes a witness to the extremes. The disparity between the self and the mundane act we present every day will collapse in time, but for now an outlet for the more extrovert facets needs to be found.
A few years ago I read a very instructive comment regarding SN identity: “we create a parody of ourselves on Facebook and before we know it we become it”. It resonated with me like a revelation; I was experiencing it in Second Life. Perhaps “parody” doesn't quite capture it and should be qualified by pointing at the need to make a joke of it – we simply can't face our fantasies easily. Fetishes or even quirks aren't welcome either, so perhaps the “joke” identity is often the only way to come out and reveal the other self.
This paradigm is far more obvious in Second Life - the concept of an avatar identity is a given and the meaning implied through it is widely accepted and readily understood. First-time noobs seem perplexed, but soon someone tells them it isn't MSN and that they would win more friends being a truck or a lizard than running around as the “boy next door” screaming “will someone cyber with me now?” I just didn't get SL until I made my first female avatar; I had plenty of opportunity to chat with people at the local pub in real life, so I'd seldom log in just to be there (I started when I was studying it). Needless to say, before long my id (alter ego) became an obsession and the boundaries began to blur. In time I realised that I had recreated my identity against all the expectations of conformity in the world around me. However, I'll tell that anecdote in another post.
TBC

web3 – to be or what to be? that is the question

As years go by I ask myself: isn't it time to follow up on the essay I wrote back in 1996 - “the medium shapes the message”? Perhaps now that practically all of those speculations have been realised, the time has finally come. Incidentally, I re-published it at the start of this blog.
In 2007 I came across a rhetorical question posed by Richard Bartle – “to be or what to be? that is the question” - since then it has been my sig line and an inspiration for all kinds of fantasies about the internet. It is only fair that I should consider this notion as a theme for contemplations on the future of the web.
Web3 is fast becoming the industry buzzword. The enabling technology is pretty much in place; all we need now is the next generation of killer applications. There are some good technical ideas being talked about a lot, but technicalities rarely inspire more than Sci-Fi plots and this isn't what I intend. Without boring you to death, for the most part I'll stick to the emotional aspects of what's to come.
______________________
technical future or failure – briefly
Obviously the most significant term we hear today is the “semantic web” and perhaps we may finally be ready to realise that rather old dream. Visionary technology always takes time. Ted Nelson proposed a very idealistic conception of hypertext (including the term) back in 1963; he was intuitive enough to call the project Xanadu. It is a very elusive goal - XML, often equated with the semantic web, and even earlier DKNF also aimed to achieve the same thing with very little success. For those unfamiliar with it, DKNF (Domain-Key Normal Form) is a relational database ideal described by Ronald Fagin in 1981, prescribing uniform database entity keys across knowledge domains. Similarly to XML, the implementation of ubiquitous DKNF hasn't been achieved yet, even if some examples of “enterprise keys” (a poor cousin of DKNF) and common XML tags adopted on an industry-wide basis do exist.
Just like Artificial Intelligence, the semantic web is a great promise which hits a brick wall upon any attempt to broadly generalise the rules. The problem of the representation of common-sense knowledge presents barriers which appear to elude the practical approaches tried so far. Some philosophers point to “embodied cognition” as a main reason for this difficulty. The constructs of knowledge and its representation are formed in relation to the human experience and our sense of the body. We have an intimate knowledge of the world around us, but capturing it gets ever harder the more generalised and universal the description is intended to be. In essence, as we tend towards the general, typologies and taxonomies replace attributes and a consensus about the significance of discrete entities becomes contentious across domains of expertise.
Perhaps given a pragmatic approach, solutions of limited scope will be adopted and prove to be effective in providing some practical value for now. This outcome would be far from ideal, but more realistic considering the utopian nature of the problem. Other solutions could also eventuate in unanticipated ways, e.g. emerge from the chaos of user content tagging... we will see. If anyone is actually interested, I'm happy to help with knowledge engineering issues and elaborate further, but I don't wish to make this argument any more obscure or unapproachable now.
______________________
an intuitive prelude to the next web
Taxes and death are said to be the only certain future, but so is appetite once a taste for something new is experienced. Web2 or Social Web has given us the appetite, so we are ready for more and have some idea of what works and what is simply lame or just another 'me too' attempt at best. Web3, in my view, will be a stage when our avatar matures and gets a life beyond the juvenile bitching temperament so dominant online in the last few years.
Is that it? Just a few words: avatar, life and emotions. Yes, and yet this is a huge ask. The future of the Web is you, me and our lives. Reality Mining is here and the companies behind it aren't likely to lose interest in developing the infrastructure and the services based on 'big' data. Web-cams, smart-phones, geolocation, real-time content, etc. may be good, but there is more to come and the next thing is very close even if it hasn't taken off yet.
A vision of virtual identity is coming into being – avatar, emotional states, sense of presence, shared situations and experiences. These allures will inevitably continue to penetrate our existence in increasingly ubiquitous and pervasive ways enabling a sense of participation and the fuzzy rewards implied by such states. For now, the hardest part is describing a paradigm without suitable analogies or practical metaphors. Let me try without attempting to exhaust the possibilities.
Imagine presenting/communicating a scenario through a medium able to mash up a real-world representation with avatar performances derived from any given situation, all under your control. Another way to visualise this might be a high-fidelity fusion of Second Life avatars in a Google Earth Street View context with sound, in a totally arbitrary collage of content from real and virtual space – performances and chronology in real time or drawn from a persistent store of past events.
I think there may be a few laughs in that, just like the incidentally recorded or intentionally set-up videos we watch on YouTube and it probably would constitute a great source of content to post there as well. Perhaps you have a few ideas too, feel free to comment.
______________________
simplicity rules
The greatest aspect of doubt in the above scenario is the notion that “easy” and “simple” always win on the Web and the more complex capabilities seem to die in the long run. The most common explanation for the success of Facebook over MySpace, previously the dominant SN platform, was the plain interface and limited possibilities in comparison with the open-ended style of the predecessor. This was largely true; in the days when blogs and MySpace pages were a new thing, both required a fair bit of creativity from the user. However, the great majority somewhat lacked in that respect and felt intimidated, even if it may have seemed that everyone was writing a blog or posting pictures and music tribute pages. In reality very few actually wrote anything, and those people were strongly motivated by one thing – a chance to gain readers and ride the tide of media attention obsessed with the new phenomenon. Similarly, MySpace gave people a chance to have their own Web presence and an outlet for their aspirations and creativity.
When Facebook emerged, everyone got a chance to participate with very little effort and, since the capabilities were so poor, nobody felt like a poor cousin to the more creative show-off crowd. The masses flooded in and by and large it satisfied the need to share snapshots of their lives. Seven years on it is still growing, but we can't be deluded that a “wall” and a few things added as time went by is the ultimate function set. Indeed, Google+ surprised me by providing such a blatant duplicate. Considering the earlier attempts to innovate the medium (Gmail and Google Wave), I was naive enough to expect more. The popularity of Facebook is growing not so much because it is precisely what people want, but because the platform is so ubiquitous now that everyone else needs to join it just to post a comment on anything going on around them. Yet another factor is the age group spread – kiddies and the aged are getting in on the act too. What does it say about G+? We will see, but I doubt that many will switch. Perhaps it will have more impact on LinkedIn than the target in its sights.
The Facebook login/membership/profile has a lot to offer as far as new SN possibilities are concerned and it is increasingly integrated into new portals, as well as allowing external data to be fed into its space via the apps API. These days, the prolific population and the extent of participation on Facebook constitute a tempting proposition to plug into that crowd instead of building a competing presence from scratch. Provided that Facebook policy remains committed to this model – and all intelligence suggests it isn't likely to be compromised if the anticipated company float is ever to get off the ground in a big way – there are plenty of possibilities for extending the FB virtual identity and profile.
______________________
virtual face-off
This is the most likely path towards a more complex participation model and additional capabilities being explored in an effective way. In the event that Google succeeded in wiping Facebook off the face of the Earth, the controversy of real-name identity aside, the same if not broader opportunities would most likely exist. The vision of a next-generation, richer virtual identity is very likely to encompass all manner of entertainment forms our devices are and will be capable of. When SN as we know it now was born, AJAX (Web2) technology was just a geek way to build dynamic content using the HTTP request object in JavaScript, and didn't yet have a vernacular name. However, what proved to be the most significant step for the Social Web, and is about to become a lot more prominent, is not the technology but identity. Ever seen anyone log into iGoogle intentionally? Not very likely – it would be such a nerdy thing to do. Google struggles so hard to track and target user profiles, and yet the opposite is true of FB, which is worth a lot more than my estimate – actually I'd better double it, perhaps double it every Christmas starting with 2011...
______________________
situated being and sharing
What sets this new Social Web apart is situatedness (yup, most likely a new word 4u). The integrity of communication in the real world is approximately 40% context, 40% emotions of body language and tone, and only some 20% words. Current SN communication is devoid of the main cues we are so reliant on and the effect of this is obvious to anyone reading a “wall” or “stream” for the first time, even more obvious (by contrast) to the users of richer SN platforms like Second Life (SL). By no means am I suggesting the Social Web could adopt SL or a similar platform, but the key aspects of situated social events and hangouts, together with a richer sense of identity and individual style, will inevitably penetrate our online and real-life space.
My own experience of being Charlette Proto in an experiment lasting a few years of 24/7 virtual life in SL is by all means an extreme example. However, it provided me with a lot more than the metaphors and analogies to visualise some of the likely aspects of a future social life and a heightened expectation of SN at large. Above all, it illustrated just how rich a virtually situated life can be and offered a possible explanation of how the naff angst and all this frustrated bitching of today's SN is born out of the struggle to squeeze more of our life between the lines.
TBC

brainstorming primed for invention

______________________
conception scenario
In essence, I create in a three-step process. First, I contemplate and research the context of the problem and consider possible ideas; this step rarely results in an actual design. Then, as I gain some distance from the initial task – generally preoccupied by something else – the answer dawns on me and I record it in a freehand sketch. Similar experiences are reported by the majority of people with extensive experience in design, invention and creating all manner of things. Beginners tend to understate the connection and the structure of these three phases and are likely to say that the idea just came to them out of the blue.
The pattern of these three stages – seeding, subconscious reflection and the synthesis or discovery – is worth focusing on in an attempt to identify a methodology for creative problem-solving. The task of defining an effective technique, especially in the context of brainstorming scenarios, will readily benefit from harnessing that pattern and in particular from the introduction of a priming/seeding stage. Ronald Finke (Finke et al 1992) describes this as the “generation of preinventive structures”, a key stage of the Geneplore model - a methodology for the collaborative creative design process.
Finke's (Generate/Explore) model prescribes a deliberate temporary suspension of professional expertise in order to maximise the emergence of unique solutions. Habitual application of expertise carries the implication of routine outcomes limited by mental blocks imposed by existing heuristics. Novel and original solutions, and invention in particular, require a fresh and open minded approach to the problem at hand and the consideration of the likely possibilities.
In my opinion, this is an ideal opportunity to exploit the synergy offered by brainstorming sessions, instead of expecting and relying on people with deep and specialised field expertise to consciously avoid conventional thinking. The procedure would consist of a priming brief presented by experts, e.g. the scope and usability objectives of the proposed project, followed by a period of reflection, which may involve other activities, and finally a core session for the exploration of multiple contributions in a traditional brainstorming meeting.
______________________
left vs right brain
A core criterion in adopting a suitable methodology is actually a very personal thing. The critical guiding principle is compatibility with individual tendencies in cognitive disposition. By and large, people can be classified as left or right hemisphere dominant in their processing of mental tasks. A balance of both is a desirable trait in everyday life, but problem-solving situations benefit from a fusion of the two extremes. Collaboration between people representing the opposite sides of the spectrum generally approaches optimal results by combining the strengths of their respective predispositions.
A number of cognitive dimensions can be identified as characteristic of the left and right brain dominance. These include: decomposition vs holistic (reductionist vs systems view), sequential vs random, symbolic vs concrete (diagrams vs prototypes), logical vs intuitive, verbal vs visual and reality vs fantasy. On one hand, left hemisphere dominant people are more productive at decomposing problems into components and meticulously constructing the desired outcome. Yet on the other, the right hemisphere dominant people work in a less structured manner, tending to veer off on tangents and discover unexpected results. Creative problem-solving situations call for a combination of both groups in order to deliver innovative and disruptive outcomes.
______________________
our objectives
The dimension of innovation can be distilled as a continuum from routine craft to creative invention. The routine path is grounded in specialised insight and leads to predictable results with great certainty, but little or no innovation. As the process tends towards the creative path, with an increasing potential for the discovery of novel and disruptive products, the confidence in reaching valid outcomes inevitably diminishes. For every innovation in the marketplace there are thousands of failed concepts we may never hear about, even if some of these possess ample characteristics of killer applications. Similarly, countless mundane “me too” products will never reach the proverbial 15 minutes of fame for anyone to take notice, regardless of the stated objectives, like aiming to change the landscape of the industry or other claims expressed at launch time.
The salient question is: how to increase the likelihood of desirable results on the unpredictable path of invention?
The big-picture answer of course isn't simply a design process issue; many other factors contribute to success and countless examples readily illustrate this. What would the iPad's chance of success be without Apple and its religious consumer loyalty, born with the iPod and iPhone range? Even without brand equity as such – consider Google without the initial support at Stanford, or Facebook launched at some two-bit school instead of Harvard. All three examples are very cryptic, so I'll forgo any attempt at a deep analysis beyond a few thoughts to ponder.
Apple is a company specialising in easy-to-learn, pleasurable-to-use products without any user research. Google's success has more to do with the demise of AltaVista under consecutive takeovers of the parent company Digital – first by Compaq and in turn HP – than with its PageRank algorithm, the speed or, for that matter, the proposed business and monetisation model – there wasn't one. Similarly, Facebook only began to look like money when MySpace was strapped onto News Corp. for a hefty 580 million dollars and slapped with a very ambiguous expectation of delivering the online market to Murdoch – a disaster considering the widely reported stats like 4 in 10,000 click-through rates on ads in SN.
The company and its culture is yet another aspect of the bumpy road to product success. We need people with the conviction and the passion that Steve Jobs had selling jelly-bean iMacs despite the dated bubble design in the days of sexy and slim LCD screens. When Jobs returned to Apple, then on the brink of bankruptcy, he found the frustrated Jony Ive, who showed him the iMac designs shelved by previous management, and so the company was salvaged – to top the profit stakes now. Can such a transformation be replicated? Certainly it can, but tweaking the knobs while making exaggerated claims is not likely to deliver the results.
The answer is: talent, audacity, open-mindedness and of course a suitable process.
______________________
take the world by storm
Give or take some prescriptions and the earlier priming session, a brainstorming event could closely follow the well-known process and guidelines. Anything goes as far as idea capture and documentation are concerned. Those comfortable with verbal or bullet-point descriptions should be mindful that input from other group members needs to be integrated on an ad-hoc basis, even if the resulting lists aren't quite so orderly or as neat as intended. Mind maps should be considered for their inherent flexibility of structure. However, freehand visual representation provides the most fluid and open-ended means to capture the rich picture of the problem and solution domains, and my “doodling and sketching” article may be worth reviewing.
The idea generation process should diverge as widely as possible at the early stages. In the event that an emerging concept reaches wide acceptance, or various ideas appear to converge on a common theme, the process shifts into deeper (more specific) exploration of its entities and personas or actors, including all implied aspects of the situational context. Each stage of the divergent “what if?” exploration should reach an optimal phase when different “stories” present common attributes (converge) and this may be documented in greater detail.
______________________
low fidelity prototypes
Lo-Fi prototypes provide a very rich form for the exploration and capture of ideas. Paper, pens and scissors, post-it notes, Lego or even screen-based contraptions – whatever the medium, prototype props will aid and empower the brainstorming process. By making a simulation of Use Cases more tangible, especially for the “right brain” participants, prototypes extend the dialogue and motivate both different and deeper persona descriptions. I strongly encourage role-play and the use of props (Lo-Fi prototypes) with particular attention paid to the scarcest resources (see below) for each persona and situation, with a view to minimising the effort where feasible.
______________________
guiding principles
The core guidelines are: simplicity and reflection in action. BJ Fogg proposed a very convincing model of “simplicity” as a function of the scarcest resource in a given situation. The scarce resources include: time, money, physical effort, brain cycles, social deviance (unfamiliarity) and non-routine actions. Ideally a use case would present just one scarce resource, but more often a combination of a few will need to be balanced out.
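To make the “scarcest resource” idea a little more concrete, here is a toy sketch in JavaScript (my own illustration, not Fogg's notation): given a rough cost for each resource in a particular use case, the perceived simplicity is limited by whichever resource is scarcest, so that is the one worth designing down first.

    // Toy sketch of Fogg-style simplicity: the scarcest resource sets the ceiling.
    // Resource names follow the list above; the 0-10 scores are purely illustrative.
    function scarcestResource(costs) {
      var worst = null;
      for (var resource in costs) {
        if (worst === null || costs[resource] > costs[worst]) {
          worst = resource;           // the most "expensive" resource so far
        }
      }
      return worst;                   // simplicity is limited by this resource
    }

    // Hypothetical sign-up flow: quick and free, but socially awkward to complete.
    var signUp = { time: 2, money: 0, physicalEffort: 1, brainCycles: 3, socialDeviance: 7, nonRoutine: 4 };
    scarcestResource(signUp);         // -> "socialDeviance" - the thing to redesign first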
My own interpretation of Gary Rolfe's “Reflection in Action” (or Reflective Practice Model) comprises three questions posed by practitioners at each stage of the concept exploration or production cycle: What if? So what? and What now? In this form of analysis a description of the situation is obtained, which leads to scrutiny of the outcomes and construction of the knowledge learnt in the process. The net effect of this approach constantly improves the product itself and promotes the individual mastery of the practitioner.
An aside: in the context of workplace Performance Reviews and self-improvement, the principle of Reflection in Action provides an opportunity to elaborate on the integrity of one's responses and the influence of the impressions arrived at in the process of asking the three questions.
______________________
judgement
Provided that ideas are elicited in the brainstorming scenario, selecting the most promising candidates for further development and obtaining a consensus, in order to reach a shared vision and a sense of commitment, is likely to present a formidable problem in itself. Competent design professionals should be able to address this dilemma by recognising and championing the merits of outstanding concepts; however, the validity of this methodology and the whole process is clearly in someone's hands, or rather at the mercy of their emotional intelligence.
______________________
handover
While this is not a terminal stage in absolute terms, the design and engineering members of the team would take over the project at this point, introducing the regular delivery process, e.g. Agile, Kanban, Lean etc. Further critiques and reviews of the product under development lend themselves to additional iterations in settings analogous to the described brainstorming sessions and benefit from a “check-in” of results against the originally stated goals and objectives.
______________________
why not?
If company culture and value structures are greatly at odds with the notion of unpredictability and trust placed in methods with a relatively low level of confidence, perhaps a more pragmatic approach of routine processes should be adhered to, forgoing the potential for invention and the goal of delivering disruptive products.
______________________
REFERENCES
Ronald A. Finke, Thomas B. Ward and Steven M. Smith, 1992, “Creative Cognition: Theory, Research and Applications”.
Another worthy read on the subject: "The Brainstorming Process Is B.S. But Can We Rework It?".

creativity in brainstorming scenarios

Fundamentally, the situation is that of a small team engaged in the process of problem solving in a commercial setting.
Brainstorming is an approach to collaborative idea generation. Its proponents emphasise the benefits of high levels of novelty, quantity, diversity, unusualness and imaginativeness in the identified solutions. (Finke and Bettle 1996)
The term “brainstorming” and its methodology were introduced by Alex Osborn in 1956. It is defined as a set of guidelines which a small team (ideally seven people or fewer) should observe as they take turns in a loosely organised, casual meeting. These are: Defer judgement. Encourage wild ideas. Build on the ideas of others. Go for quantity. One conversation at a time. Be visual. The last two guidelines have in time been appended to Osborn's concept and countless other variations have been proposed since, including an anonymised computer-based version, but these are beyond the intended scope of this post.
If you are not convinced, you aren't alone. Parodies aside, I'll attempt to outline some of the criticisms of brainstorming, with a notion of “harnessing” the intended benefits of crowd-based creative power through a conscious realisation of its likely pitfalls.
______________________
no such thing as bad ideas?
The main objective of brainstorming is the generation of multiple ideas in an unstructured manner, looking at problems from different perspectives and allowing possible solutions to emerge in a natural and unconstrained way. However, it is typically employed to solve particular problems rather than as a means of exploring new possibilities. Additionally, as a result of conformity it tends to produce “groupthink”, which all too often leads to the adoption of low-value, impractical or unrealistic solutions. (Finke and Bettle 1996)
The assumed openness to elicit untested ideas and explore a wide range of possibilities is implied by the prescribed freedom from critical comments and evaluation. In my opinion, this can be a weakness in the process. In brainstorm speak this is called a “no bad ideas” environment. The rationale behind this prescription is that there isn't a greater killer of invention than comments like “that won't scale” from one's peers. (Bernard and Summers 2010) Perhaps remarks like “that won't scale” should be acknowledged by all parties as a contextual fact in the consideration of the proposed idea. Terminating the exploration due to a single constraint would be premature, but I hate the thought that the above-mentioned vocal person would later be assigned to deliver the execution of the flawed concept against her or his judgement, simply because the team committed to developing it further without hearing their expert opinion.
This guideline (defer judgement) is the most commonly broken in real life and I may be encouraging that further, but being exposed to, or rather supported by, the breadth of collective team expertise is a well recognised strength in collaboration.
______________________
everyone can be creative
The assumption is that, given the right environment – an incubator with all the right conditions to stimulate ideation and make it fun (props, toys, drinks and snacks), a clearly stated free-from-constraints attitude (“no bad ideas”), and a means to capture the ideas (whiteboard, pens and paper, cameras, voice recording etc.) – when every member of the group shares their inspirations given an equal chance, an invention is likely to emerge spontaneously. This idealistic view is widely accepted, but is it a realistic and reliable path to generating creative work?
Going by my experience, I can only say: sometimes, and not very often. As an exclusive approach it simply represents a try-hard design utopia, if not a means to undermine the importance of roles traditionally defined as creative. After all, not everyone fixes their own car or builds their dream kitchen hands-on. Skills are a combination of both the learnt technique and an inherent intuition we have for the subject of the work we pursue, hence the expectations placed on brainstorming may actually be somewhat naive.
First, a number of personality issues quite at odds with brainstorming need to be considered. Pulling raw ideas out of one's head and expressing them for the benefit of the team might encounter some barriers – after all, not everyone is an extrovert. Then we need to think outside the box, when most people feel more comfortable with low-risk regular routines. And finally, the sudden impulse/flash of creative insight is recognised as strongly associated with intrinsic motivation, which by definition is the opposite of the more common extrinsic motivation based on set goals, scores/gains and the recognition of value by others.
Further, we have the issues of productivity and tangible value outcomes, which need to be addressed. If the un-primed ideas actually flow, will other members of the team really be challenged to take ownership of the implied potential of this creativity, or simply idle awaiting their turn, or worse - freeze with apprehension of their own defeat? Rather than perceiving it as an opportunity to express their own ideas and views, many people feel quite confronted by the notion of addressing their peers. The 'productivity loss' relative to working individually is well documented by various studies: evaluation apprehension despite the prescribed rule, social loafing as a result of shared responsibility and the limitations of short-term memory resulting in forgetting one's inspirations while others express their own - to name just a few. (Finke et al 1992) The evidence of cop-out contributions by employees expected to come up with individual pursuits towards the common good is abundant and YouTube is a great beneficiary of those.
There is more – just look at various cultural values ingrained through education and life experience at large. For example, Confucian philosophy, with its deep influence on the Chinese, Korean, Japanese, Taiwanese and Vietnamese cultures, fosters hierarchies of mastery and a great regard for traditions, the past and its wisdom. This has a profound impact on an individual's concept of professional practice, knowledge and the masters entrusted with its responsibilities. In many ways, these principles are at odds with the notion of informally guessing or stumbling upon solutions, which brainstorming embraces at its core. Of course this doesn't stop Asian people from being good brainstormers, but it certainly undermines the sense of respect and trust in the adopted results.
______________________
can we fix it and should we try to?
Apart from providing an aspect of fun at work, brainstorming gives us a personal voice. At a time when social culture at large strays further and further from stereotypes, the work environment suffers from a 'job hat' syndrome and the hat appears to be worn ever tighter. Therefore a personal voice is an increasingly important asset; a forum with an attentive audience is valuable in terms of feeling involved, empowered and responsible for reaching high-level objectives.
Above all, brainstorming offers a means to suspend the routine approach to tasks and one's reliance on expert insight and instead act in a freer or fresher manner. This, indeed, is where we may discover the genius inventor within us.
Problem solving based on specialised expert insight provides a high level of confidence in both productivity and predictability of results. In other words, if a problem is trivial and the desired outcomes are well known in advance, there is no need to embark on discovering anything new and a craft-like approach will be the best fit for the job. However, an expectation of disruptive outcomes or invention calls for maximum spontaneity in the creative process, the proverbial “thinking outside the box”, in order to reach optimal results. I feel that non-routine thinking presents a significant enough value in design to dedicate my next post, "brainstorming primed for invention", to an attempt at making brainstorming really “work” in creative pursuits.
______________________
Ronald A. Finke and Jonathan Bettle, 1996, “Chaotic cognition: Principles and Applications”.
Alex F. Osborn, 1956, “Applied Imagination”.
Chris Bernard and Sara Summers, 2010, “Dynamic Prototyping with Sketchflow in Expression Blend”.
Ronald A. Finke, Thomas B. Ward and Steven M. Smith, 1992, “Creative Cognition: Theory, Research and Applications”.

web design idioms - the need

______________________
borrowed idioms
Every medium has its idioms. At first, they are borrowed via analogy from other mediums, e.g. TV archetypes evolved from cinema newsreels to make daily news bulletins, and live radio performances were appended with vision to make various entertainment shows. With time, unique idioms emerge, for example live reports with in-situ footage, soap operas etc.
The web started the same way: Gopher or console-terminal-like selection lists and book-style single-column text. By about 1996 (going by my own example), index card or tabbed notebook metaphors replaced the menu lists and margin columns were introduced to present additional content.
In time, multi-column layout was adopted from newspapers, following the rationale that more items needed to be seen without scrolling the page. This metaphor allowed advertising to make it to the top of the page (even the anti-design Google tried this for a short while). The experimentally inclined and academic theorists continued the search for unique forms and structure, but just about all businesses continued with the 'me too' approach.
______________________
so what?
Does it still work? Or is it time to try a bit harder now that the technology is mature and users seem ready to accept more than the yellowed pages of last century's newsprint style? Life in the 21st century may have changed a lot – many people are not only surprised but often shocked – and yet the 'look' is drab and nobody seems to care. Even the loudest designs today are all retro and rehash, perhaps with a little twist of technological glare.
The short answer of course is YES, it's about time! For those thinking “why change something that works?” – because of new technology and the evolution of the user interface, e.g. small screens and touch interaction, just to name a few. Small screens dictate a more dynamic approach to design and the touch paradigm really fails when content is crammed into columns fighting for the top slot.
______________________
what next?
The first thing to die has to be the fixed width. In terms of usable screen real estate this was always a loser's way to eliminate a few design unknowns. I'll be very pleased to forgo the needless scrolling of the narrow strip so many sites are currently limited to. The opposite applies on the phone - sideways scrolling.
Much of the navigation paradigm is heavily biased towards pointing and clicking with a mouse, and it certainly fails when the target is occluded by the user's hand. Then there are the severe dexterity limitations of fingertip selection on the small screens of a phone. I prefer resistive touch screens used with long fingernails, but that's not the answer - besides being limited to single-point gestures. Swiping pages may be a nice gimmick for the iPhone folk, but we need to find better and more precise solutions before the endless swiping novelty wears off.
Other problems come to mind, but what would I talk about in real life? I also have some answers in the bottom drawer, however - design ideas turn into 'open source' when they go online.

the "designer paradox" - my approach

Design theory, in contrast to real life practice, rarely confronts one of its key aspects, namely: 'the designer paradox' phenomenon. Creative endeavour at large and design in particular faces specific and unique challenges in industry application. On the one hand, creative design is expected to encompass aspects of inspired genius and yet, it must meet the pragmatic constraints of deadlines, costs and above all client expectations. All too often, the task of resolving these conflicting objectives is compromised by issues beyond designers' control, for example, poor requirements specification or inadequate definitions of the problem scope.
So far, the challenge of mitigating these aspects of design practice has not been solved in a ubiquitous way. However, I believe that optimal outcomes can be realised through an informed approach. Adoption of the Systems View for both the problem and the solution domains, in combination with relevant Cognitive Psychology hypotheses, facilitates a methodology which empowers all stakeholders and minimises the ambiguity of the design process. This notion assumes that tangible benefits for project outcomes can be gained through conscious realisation of individuals' roles and their influence, in context of: problem comprehension, idea generation, interpretation and concept exploration. Further, my approach recognises/accepts the opportunistic design tendency (solving problems with low cognitive cost first) and aims to promote reflection-in-action (refinement through reflection on interim stages of the design).
The probability of reaching creative/innovative outcomes is an altogether different matter and I'll attempt to address it before long.

on User Experience design

Opinions on a subject close to one's heart can be quite difficult to express. UX certainly is one I've procrastinated on for a long time and probably will never be able to capture in a short neat post. However, I feel compelled to start boiling it down hoping that in time a well distilled elixir of love will emerge. When unable to precisely define a notion of what one stands for, it is always best to eliminate the superfluous and the opposites from the spectrum of likely candidates for core interests and beliefs.
Rather than the “look and feel” of landing pages or navigation issues, which appear to dominate many UX debates these days, the transitions of interactive devices and mechanisms in user workflows, e.g. content authoring tasks, are of particular interest to me. The nature of the users' dialogue with the contraptions facilitating task-oriented actions within the system is an increasingly important aspect, especially when we consider their impact on the extent of engagement in creating content.
Not intending to discount the significance of navigation, wayfinding and landing page design, I'll focus on the wizards enabling authoring, ongoing maintenance and organisation of ad-hoc created content on the web from a user perspective. A few issues immediately come to mind and given time I'll attempt to attribute those to aspects of success and failure in the context of user adoption and the capability to evolve with changing needs. The cumbersome authoring process of this system (blogspot.com), and blogging systems at large, should provide ample opportunity to discuss failed workflows in contrast with the later generation of Social Media sites.
I suspect that much of the failing User Experience can be directly attributed to system design based on code engineering tasks as opposed to usability principles, and the User Centred Design process in particular. This leads me to the role and the general fit of UX practice in business culture and an interesting paper by Zafer Bilda on this very subject. The report comprises a number of propositions Bilda presented to a group of 15 UX practitioners in order to solicit their views. I felt motivated enough to respond to the issues raised myself and, in part, to address the answers he obtained from the group. Read the original paper and my comments on “Chris Khalil’s Musings” blog.

agile or fragile software development

Agile methodology is widely accepted and has been adapted to many specific applications. The motivation to develop “lightweight” processes for the accelerated production of software led to the formulation of the very ambitious Agile Manifesto in 2001. 1999 was a low point for the software development community: not only did revisiting old code to hunt for Y2K bugs seem like punishment at the peak of the .com boom, but more critically the process exposed fundamental weaknesses of the documentation-heavy processes of the past. It was time for a disruptive change; the pragmatism behind a general reluctance to adopt new approaches had evidently failed us.
The appeal of Agile was by no means a discovery of new techniques – we've seen or done it all before – but this time a methodology was elegantly distilled into a few pages rather than a book. The Agile Manifesto offered a convincing vision of preferred values and principles focusing on collaboration, responsiveness, the individuals and their roles instead of rigid processes and documentation. Other benefits ascribed to Agile include: satisfaction through rapid delivery and high ROI (return on investment).
______________________
more agile
The informal and often cuddly nature of the face-to-face workflow has at times been vulnerable to parody in its less successful applications, but the wide acceptance speaks for itself. The long emphasised focus on the user and the problem domain is finally an integral part of the process and not an 11th hour contractual necessity. From a design point of view, this aspect alone opens up the scope for exploration of truly creative solutions. Shifting from the closed shop research of the problem space towards stakeholder interaction, enables the developers to go beyond delivering 'what the client asked for' to gain the insights which increase the likelihood of discovering innovative and disruptive outcomes.
______________________
but still fragile
But isn't all this bonding and hugging amongst stakeholders just as fragile as other partnerships we form in everyday life? Well, yes it is, and either party may shoulder the blame. On the developer side, the integrity of the routine daily progress/problem updates can be compromised for various reasons, including the reliance on interpersonal relationships. However, the client representative role is far more vulnerable to failure. The mantra and reality of the partnership and the sense of engagement will inevitably lead to an emphatic attachment to the notion of validity and value in the project, without the safety net of ongoing checks and balances from peers. Further, the client-side stakeholder isn't necessarily representative of the end-user or, worse, the prospective user may for now be just a speculative 'persona' profile.
The methodology falls short of full coverage in a number of key scenarios of the product development lifecycle and delivery situations. The Agile stereotype of client–developer relationships, typical of the last century, e.g. building business process software, is not universal. The vendor as a commissioning party and the direct-to-public deployment paradigms are increasingly common in today's market, yet neither seems to fit the magical stakeholder interaction scheme. My deliberations on those two examples revealed the most glaring omission. The inherent rush to produce application code, due to the value placed on software as a primary measure of progress, appears to overlook the need to integrate the critical early stages of the process: research, idea generation, design exploration and prototyping.
______________________
what about design?
It is one thing to declare catchy principles like openness to changing requirements and regular adaptation to changing circumstances, yet it is an absolute mistake to embark on code without a design and prototype validation. Under those circumstances, the unexpected manifestation of changes and the necessary adaptations are almost guaranteed and would consequently compromise the expected delivery timeline and ROI, and ultimately result in poorly designed and unmaintainable code. Yes, the core aspect of Agile – iterations and the prescribed code refactoring – may address the latter problem to some degree, but surely not without further compromises of the primary objectives.
I'd like to entertain the notion that Agile methodology can be readily mapped to the design and prototyping stages, just as it has been adapted to other processes. However, besides the strong appeal of the core values and user-centric principles, I fail to see further complementary features. Perhaps it is time for the early stages of development to be specified as additional steps extending the Agile iteration methodology at a cost of diminished elegance and simplicity.
______________________
is there a fix?
Many designers agree that the correct way to accommodate design in Agile is to define “an initial phase zero that takes a long-term and holistic view of the project needs before moving into agile process” [Bernard C., Summers S., Dynamic Prototyping with Sketchflow in Expression Blend]. A common approach, in real-life application, is to embellish Agile with attributes of a familiar or industry-recognised process, for example, see “Kanban Development Oversimplified” or “How Scrum Induced Sub-Optimization Kills Productivity (And How To Fix It With Kanban)”. However, none of those examples covers the User Centric Design model. Unfortunately, all attempts at pimping up Agile with 'process' simply water down and often compromise its core principles and create a patchwork of less appealing prescriptions rather than a crisp, well defined methodology.

to app or not to app, that is the smartphone question

______________________
cool code in the cold
At present, there seems to be a lot of dispute about the appropriate technology for implementing smartphone-oriented RIAs (rich internet applications). The debate is especially valid in terms of the 'me too' rush to develop iPhone and iPad apps instead of mobile-device-optimised websites. The picture is muddy indeed when we consider the decision-making behind the tendency to favour the native app paradigm, and especially the iOS platform. Let's look at a similar credibility dilemma in the recent past - the emphasis on deploying Flash websites, against the better judgement of many internet programmers.
In the days when the 'suits' had no idea what the internet technology objectives were – and many felt quite proud of this since they had secretaries to type and print emails – decisions were all too often left to the young 'propeller heads' with little experience and even less expertise. The technologists aware of the big picture in terms of network applications or User Experience seemed just too hard to delegate to, especially when requested to “show something by Friday”.
The cool “look and feel” designers stepped in without hesitation. Many had Macromedia Director skills and felt quite comfortable with applications that didn't print and broke the fundamental interactive UI event behaviours. And so, the user got the 'look' in the form of “Wait... loading Flash” and the 'feel' of trying to work out how to scroll information clipped in a tiny fixed-size box.
Luckily, by the time the legacy Netscape was buried and the browsers imitating the IE DOM provided a nearly uniform client-side scripting environment, the industry had begun to exploit the dynamic possibilities of HTML in combination with the XMLHttpRequest object. The 'suits' got a new phoney moniker to talk about – “Web2” – and the web became a hub of Social Networks. The ECMAScript committee was shaken up by Microsoft, Google and Yahoo developers who intended to adhere to a deliverable standard, forcing the committee to abandon the ES4 utopia and accept a more realistic roadmap of critical revisions to ES3, which bore the ES5 release in 2009.
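For anyone who never saw that “geek way” first-hand, it amounted to little more than the snippet below - a minimal sketch of the pre-jQuery pattern (the URL and element id are placeholders of my own):

    // Fetch an HTML fragment and inject it into the page without a reload -
    // the whole trick later branded "AJAX". Older IE exposed the object via ActiveX.
    function loadFragment(url, targetId) {
      var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                      : new ActiveXObject("Microsoft.XMLHTTP");
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          document.getElementById(targetId).innerHTML = xhr.responseText;
        }
      };
      xhr.open("GET", url, true);   // asynchronous request
      xhr.send(null);
    }

    loadFragment("/latest-comments.html", "comments");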
The final nail in the coffin for Flash is being hammered in by Steve Jobs. The 'special' relationship between Apple and Adobe (now the owner of Flash) came to a head when Jobs publicly committed to not supporting Flash on iOS.
______________________
ecosystems deja vu
For a while it seemed that we had the future direction mapped out. But suddenly, the suits began to notice the outstanding growth figures as Apple introduced the new status symbol on the block. The techs lamented – a PDA with GSM not even 3G, UI designers cried – the icons all have the same shape, women frowned – the fingernails get in the way and the calls drop. And yet, the iPhone boosted street credibility in a prolific way and at the same time proved to be the most disruptive technology in a long time.
Tempted by Apple's market success (despite, or more likely due to, the device's high cost) and the hefty 30% margin on third-party app sales and subscription revenues, the manufacturers are busy clambering up the greasy pole building their own ecosystems. Quite apart from my semantic objections to using the term ecosystem (the original meaning is a topic more alive now than ever before), the concept of proprietary device- or system-oriented marketplaces for digital products bluntly contradicts the principles of User Centric Design. The notion that consumer choice and their needs are the primary objectives simply falls by the wayside when you lock them into a proprietary procurement and service channel.
The choice between native apps and cell-phone-optimised websites (web apps) should be based on technical requirements and user tasks, but in reality it is anything but. For one, when a company director or CEO asks for an iPhone app, very few employees have the guts to ask if the decision was purely influenced by the 12-year-old technology guru at home. A truly 'smart' phone screen could be a combination of 'gadgets' for live data feeds and links to commonly used web URLs, not unlike a fusion of iGoogle and the Chrome opening page. However, since the phone manufacturers suffer from amnesia when it comes to the failures of locked-in online environments, e.g. Apple's eWorld or the Microsoft Network, we have to re-learn the lessons of 1995. Thank God that Steve Jobs is on board this time, even if his health isn't quite what it was in 1996 when he returned to Apple and scrapped the Newton MessagePad.
I realise that statements like this are very controversial and deserve a lot more than an off-the-cuff post. However, in the interest of brevity I want to limit it to just a few key points. So besides the “customer is king” argument, what is the major ecosystem problem?
The business culture relating to Intellectual Property law. Just look at Apple: with 250,000 apps on offer it is apparent that some of them may violate registered patents and Apple is in no position to verify the legitimacy of every offering it sells. So far, when an app becomes the subject of an IP claim Apple pulls it from the store and in many cases seeks to be admitted as an intervenor in the legal case. There are countless reasons for Apple to be involved considering the business size and immaturity of many developers behind these apps. The question is, can and should Apple's IP people (Google and Microsoft are in the same boat) deal with the growth in patent assertions by licensing companies and patent trolls? My guess is obvious, I hope.
______________________
the silver lining behind proprietary technology
The native app development environments aim to streamline implementation and delivery, as well as enhance UI presentation on the respective platform. These intentions are very worthy, but are they effective considering the need to deploy three or more app versions and engage multiple vendors? Microsoft is the last to join Apple, Google and Nokia in building ecosystems and back-end clouds. The Windows Phone could become more important now that Nokia has adopted it for their products. Even if this choice was strongly influenced by the monetary benefits of the partnership, its technical advantages, if any, will constitute the difference the user may experience. Perhaps sharing a few thoughts about WP7 technology will help to illustrate my point.
Personally, I don’t believe that the new focus on the phone as an entertainment device, in preference to the earlier emphasis on business/enterprise communications, is anything but an iPhone panic. The new architecture appears to be aimed at single-user games and trivial commerce apps. The Windows Phone 7 platform is based on Silverlight, with XNA for applications requiring 3D support. These two development paths, 2D or 3D, are quite distinct and don't allow for mid-stream migration. The single-tasking runtime environment (tombstoned when the user switches to another app) is also limited to self-contained, sand-boxed executables (exclusive memory and local storage of persistent data), aiming to enhance security and stability but as a result severely crippled in scope. In plain English, this paradigm hardly looks capable of enabling the next big thing beyond Web2.
Further, just like the Apple, Google and Nokia ecosystems, Windows Phone is totally reliant on the Microsoft-controlled delivery channel via the Windows Azure back-end cloud. By and large, I dismiss the commonly expressed 'big brother' fears regarding cloud services; however, the platform-locked user experience objections expressed above remain.
______________________
clouds on the horizon
The cloud paradigm will no doubt be the basis for the next generation of RIAs and provide ample opportunities for new innovative network applications beyond Social Networking, spanning across all forms of devices we happen to have on hand. I believe that in the near-term future, HTML5 will become the favoured technology for mobile-oriented RIAs over the deployment of multiple native apps. For now, the wish-list priority focuses on wider HTML5 and SVG support.
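Until that support is uniform, a web app has to ask the browser what it implements before relying on it. Here is a minimal feature-detection sketch (the checks are standard techniques; what to do when a check fails is left to the application):

    // Rough capability check for an HTML5/SVG-oriented mobile web app.
    function detectCapabilities() {
      return {
        svg: !!(document.createElementNS &&
                document.createElementNS("http://www.w3.org/2000/svg", "svg").createSVGRect),
        canvas: !!document.createElement("canvas").getContext,
        localStorage: (function () {
          try { return "localStorage" in window && window.localStorage !== null; }
          catch (e) { return false; }
        })(),
        geolocation: "geolocation" in navigator,
        touch: "ontouchstart" in window
      };
    }

    var caps = detectCapabilities();
    if (!caps.svg) {
      // e.g. fall back to canvas drawing or pre-rendered images
    }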
As far as the more mature cloud application candidates for the smartphone boom go – Microsoft Windows Live, or Gmail and Google Apps/Docs – they hold a lot of promise, and yet in business settings I’m still a little dubious at best. As expected, Microsoft manages to meet some formidable design objectives, e.g. 'simplicity' - least effort in adoption since that is the scarcest resource - and yes, the 'price' is right. But Google services provide a better example for a number of reasons: the claimed high adoption rate, more complete functionality and, above all, a business model that doesn't compete with a core legacy revenue source. However, the monetisation model based on advertising isn't free from hindrances when one considers the targeting mechanism. Using context derived from seemingly private content constitutes an obvious problem for anyone with Intellectual Property concerns (clients’ or one's own).
Commercial contract-based cloud computing (SaaS) is another story, especially for companies culturally struggling with IT departments presenting a minefield of 'yes men' too afraid to raise an issue or too lazy to address problems.
The recently introduced (28 June 2011) replacement for the Business Productivity Online Suite (communication and collaboration tools for MS Office), Office 365, now includes Office Web Apps, allowing online document viewing (including on smartphones) and editing (albeit limited) at a reasonable price. This package will be a more attractive option for business startups than for companies with legacy in-house data stores, which will no doubt find the migration a daunting task. Moving the data from office servers to the cloud provides countless management shortcuts; however, the process implies a substantial shift in the IT culture and of course requires a 'big decision' to commit to that change. Further, there are additional costs and effort accompanying the deployment, e.g. business process related manuals, documentation and training.
To be fair, all these meta issues are equally relevant to Google Apps/Docs, in addition to some basic user skill quirks. So far, the feature-set of the Google platform constitutes a superior candidate for a replacement of local desktop apps. But, for many users, the close coupling of local MS Office Applications with Web Apps may provide the 'least pain' path in moving towards the cloud paradigm, e.g. supporting teleworking from home.
The collaborative capabilities of the two offerings are very hard to compare. The MS check-in / check-out workflow model is easier to adopt and understand through an analogy with the 'track changes' function in regular apps, and yet Google provides real-time editing by multiple users – a more powerful feature; however, in many cases it may take some time to capitalise on these strengths. My experience with Google Wave (same capability) proved that exposing users to tools which are far removed from their familiar ground introduces hard-to-overcome barriers and often a sense of dis-empowerment, resulting in a reluctance to learn.
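To make that contrast concrete: the MS workflow amounts to an exclusive lock on the document, whereas Google merges everyone's edits live. A minimal sketch of the check-out side in TypeScript (in-memory structures and invented names, purely for illustration):

// Illustrative check-in / check-out model: one editor at a time per document.
type DocumentId = string;
interface CheckOut { user: string; since: Date; }

const locks = new Map<DocumentId, CheckOut>();

// A user must check a document out before editing; everyone else gets read-only access.
function checkOut(doc: DocumentId, user: string): boolean {
  if (locks.has(doc)) return false; // already checked out by someone else
  locks.set(doc, { user, since: new Date() });
  return true;
}

// Checking in releases the lock and (conceptually) publishes the new version.
function checkIn(doc: DocumentId, user: string): boolean {
  const lock = locks.get(doc);
  if (!lock || lock.user !== user) return false; // only the holder can check back in
  locks.delete(doc);
  return true;
}

// e.g. checkOut("report.docx", "alice") -> true; checkOut("report.docx", "bob") -> false until Alice checks in.
// Real-time co-editing (the Google model) removes the lock entirely and merges
// concurrent keystrokes instead, e.g. via operational transformation.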
We will see the future before long.

For those interested in the technicalities discussed here, Anthony Franco's blog posts “Mass Confusion: The hysteria over Flash, Silverlight, HTML 5, Java FX, and Objective C” and “Mobile Strategy Best Practices” are well worth a read. Also see the mobiThinking article “Mobile applications: native v Web apps – what are the pros and cons?”, or these slide presentations on platform-specific HTML apps (both on slideshare.net, so you can read the text outlines instead of flicking through slides): “HTML5 is the Future of Mobile, PhoneGap Takes You There Today” by Davy Jones and “Building HTML based mobile phone applications” by Mikko Ohtamaa, as well as a general Mobile RIA overview, “Smart phone development” by Myles Eftos. A year has passed since writing this post and I have finally found a programmer's view on this dilemma, very well expressed in a slide presentation and an article describing the concept of TransMedia Applications by Martin Fowler of ThoughtWorks.

doodling and sketching

The act of sketching is an integral part of the creative process at the early stages of concept development. Three aspects of sketching constitute salient and unique characteristics.
- First, intuitive externalisation – subconscious exploration of the problem space and dynamic idea generation.
- Second, ambiguity of components and their relationships – implying fluidity of the captured structure and inviting ongoing interpretation.
- Third, expression of an unfinished/rough state – a form still open to changes and likely to stimulate further exploration and one which clearly isn't indicative of the aesthetic intentions for the final design.
The first is by far the most critical aspect and one that is least understood. The physical aspect of sketching is often recognised as a bridge between the idea 'in the head' and its representation in the world. The process of informally expressing uncertain ideas appears to have an energising quality: one experiences the emergence of new insights and a dialogue with the resulting form.
I'll offer an analogy which may help to explain this phenomenon: doodling on paper when one is waiting or bored presents a comprehensible scenario. The activity begins without specific intentions and is predominantly a physical act, but in time it transforms towards an explicit refinement of the emerging features. The later intentions are motivated by subliminal exploration of the rough form, which itself provokes or suggests possible interpretations.
An issue related to this analogy arises in the context of a widely accepted hypothesis that conceptual sketching with computer tools takes on a different and inferior workflow pattern. For example, Vinod Goel of UC Berkeley studied designers generating ideas by sketching on paper or using a drawing program. He noted that the designers sketching freehand quickly followed the initial idea with several versions. However, those who used a drawing program tended to focus on refining the initial design, instead of generating additional variations.
Therefore, the question is: do we have any evidence that people doodle with their computer tools? I've never observed this to be true in my extensive experience with graphic and design professionals, who are constantly exposed to an environment most suitable to facilitate behaviours like doodling. Perhaps my analogy confirms the hypothesis of diminished productivity in computer-based idea generation and innovative problem solving tasks.

effective tools for creative design

A well designed tool is in effect 'invisible' in its application. The user embodies the utility (physical or cognitive) and performs their task in synergy with the tool, unaware of the actual artefact. A simple example may be a hammer: the user doesn't observe the constantly changing physical situation, but rather hammers away as if their arm naturally included the tool's utility. Subtle feedback of the actions performed with the tool is critical to support that state. In the ideal situation, a level of trust arises in the process, allowing the degree of risk inherent in the operation to be readily overlooked. As a result, the intent of the high-level task constitutes the only concern for the user.
In the context of creative design, the only such tool remains the pencil or pen. Unlike routine production work (CAD, image manipulation, etc.), the tasks of generation, exploration and interpretation in the design process are not well supported by computer-based tools. Even the most basic and essential act of sketching feels stilted and non-expressive using a computer. It seems that a sense of ambiguity, avoided in the final product, is essential to the intuitive process of externalising and exploring early ideas. These pursuits appear to be severely compromised in the on-screen paradigm, irrespective of the additional capabilities on hand, e.g. versioning, undoing or attribute editing.
A likely exception, in my experience, may be the creation of music. In its traditional form, it is focused on motor skills and dexterity directed at instrument interfaces, and these can be readily integrated into a computer environment. However, even this proposition is debatable if we consider that the gestalt state of musical creation and performance appears to exist in the head and muscles on an intuitive level, quite apart from the creative problem solving paradigm. Rhythm and melody tend to adhere to various rules, similar to grammar in language, and these have no equivalents in the common-sense reasoning or general intelligence which underscore creative design. Consequently, while design processes and methodologies can be readily taught, creativity, just like Artificial Intelligence, encounters impassable boundaries. Common-sense knowledge continues to elude our attempts to formulate, let alone formalise, a general set of context-free rules beyond discrete specialised heuristics. Similarly, creative or inventive endeavours remain an elusive challenge as far as tools are concerned.

the dilemma of UI design based on metaphors

Originally published in 1996 on my personal website at infomania.net and www.thepla.net/~piotr

______________________
the problem
My first design of a user interface (UI) was the Sydney Water Control Database front-end implemented by Applied Control Systems in 1992. This demanding project confronted me with a significant dilemma, namely: metaphor-based user interfaces, like the Sydney Water database, promise an immediate advantage of conveying the system’s functions, purpose and current state to the user through a graphic illustration of the process involved. This methodology may seem appropriate to a broad range of UI implementations, but it isn't always the optimal solution.
The problem continues to constitute one of the most important decisions in UI design projects. In my experience, the metaphor solution is only viable for systems whose actions are substantially reinforced or literally represented by the given metaphor. In many cases, however, the user is destined to be burdened and fatigued by the superfluous visual information, especially in prolonged or recurrent use. Additionally, the metaphor may not lend itself to accommodating all functions of the interface, a problem readily illustrated by the design of this website; see the logo and additional navigation elements it incorporates (screen capture below).

A tabbed notepad metaphor website UI showing a detail from the Sydney Water Control Database front-end implemented as a symbolic metaphor representing the system structure.

______________________
the approach
The challenge therefore, is to define orientation and navigation methods specific to the context and to employ an appropriate level of abstraction with high visibility of the interactive elements within it. A critical objective in the design process is to identify correct idioms and meaningful symbolism for visual and verbal representation of the system at hand.
Every aspect of the interface must be aimed at sign-posting and guiding the user towards meeting their goals with minimum effort. Meeting these criteria, in my opinion, is more integral to the success of the concept than the often debated ‘look and feel’ or aesthetic characteristics and gimmicks which only attract the attention of first time users.

(C) 1996 Piotr Kulaga aka Charlette Proto.

the medium shapes the message

This essay was originally published in 1997 on my personal website at infomania.net and www.thepla.net/~piotr
______________________
more channels
Two dominant trends are currently shaping the ‘voice’ of our culture. On one hand, the previously clear market boundaries between computer software and service providers, the media/entertainment industry and the telecommunication companies are constantly being eroded, and on the other, the multiplicity of sources is escalating beyond expectations. While new technologies and media delivery routes are being introduced with varying degrees of acceptance, we are witnessing an explosion in the volume and diversity of source material produced.
When we examine the new ‘on-demand’ media delivery systems like the Internet, this argument takes on exaggerated proportions. Whether in pursuit of entertainment, personal interests or in research activities, Internet users establish millions of private channels in a combination of media environments at any given moment.
The logistics of the network-based media distribution platform allow information publishers, and in particular those catering to marginal interest groups, to explore the opportunities for development of specific audience-focused content. As this is realised, it further increases the choice, leading to growth in specialised services offered by mainstream publishing houses and stimulating the emergence of dedicated market-segment media providers.
______________________
a new kid
In my opinion, the new entrants amongst content and technology providers will continue to gain a substantial share of the changing entertainment market at the expense of the established media and software houses. This will be inevitable during the impending transformation and re-positioning of the respective industries.
The majority of creative output from the mainstream media organisations is currently driven by budgetary constraints synonymous with the established high-cost production and concept development methods. As a result, many publishers rely on proven ‘formula’ productions and repackaging of material whilst adding value or re-focusing for the projected audience. These restrictions, more often than the creative talent, have defined the existing media/entertainment markets, but they will not shape the less rigid network-based media operations.
Many problems will need to be overcome before the adaptation of existing programs for on-line access is at all possible. Obvious complications include copyright restrictions, royalty fees and moral rights arising from the specific contracts entered into for the production of the existing media material. More importantly, the very business structure and practices established in the media and publishing industry at large will hinder their initial ability to dominate the on-ramp marketplace.
______________________
making copies
Excluding TV broadcasting and cinema, the majority of media and software publishers rely on mass manufacturing, printing/packaging processes and distribution of 'copied' material as core infrastructure components of their operations. This commercial model is contrary to the ideals of online business practice.
In the virtual environment, ‘copies’ are generated on-demand and essentially do not represent a major part in the overall structure of the revenue generating processes. Whilst online delivery requires substantial resources like network bandwidth and server capacity to meet the demand for product (synonymous with the process of handling retail copies), these can be efficiently tailored as necessary, in complete transparency to the user.
A system geared to respond to the on-demand characteristics of the network can achieve the highest levels of operating revenue optimisation. Needing only to satisfy the total volume of requests for new material, regardless of the product mix or success of any individual item, gives publishers and resellers the most cost-effective model for distribution.
Unfortunately, this scenario doesn’t easily translate to the traditional business practices and/or skill/employment structures established in the industry. The current workforce evolved around a warehouse of stock and a costly distribution chain and may soon be an obsolete link.
Additionally, the established pricing structures for software and entertainment titles reflect the inherent costs of manufacturing copies, packaging, distribution and the losses associated with unsold and returned stock, rather than the real cost of development and user support needed by the given product.
Due to the above factors, in the immediate future we should not expect the media or software ‘majors’ to deliver the revolutionary content, since they too are starting from scratch, if not constricted in their operations by the inherent infrastructure.
______________________
online cowboys
Some of the recent technology blunders have now become legendary. It is important to keep in mind the Apple e-World and Microsoft Network episodes which help us realise the extent to which the computer industry underestimated the impact of the Internet on its future. Put aside Microsoft’s ability to survive a 180 degree turn in its Net policy to become a monopoly player on all levels and you will find that the thrust behind major success stories has been the very nature of the medium itself.
The 1991 legislation of the ‘High Performance Computing Act’ and the memorable address to the US Congress by the then Senator Al Gore, ‘The Data Superhighway’, set plans for the development of fibre-optic computer networks and stimulated many technologically competent entrepreneurs to start up companies devoted to the new medium. In an unforeseen grab for credence and recognition, small and not-so-accomplished service providers all over the world, able to capitalise on the policy push for ‘information-age’ technology, penetrated the established ‘blue chip’ IT business elite. The amazing rise of Netscape Corp. and Yahoo!, or the later examples of Sausage Software and Oz-Email in Australia, set a dangerous precedent for exaggerated expectations on behalf of participating market observers. These in fact remain the major credibility hurdle for the online industry at the end of the 1990s.
The potential for commercial gain in the ‘virtual world’ is not obvious or by any means implicit. Outside of the US, where the desired security technologies are an exclusive implementation and home shopping has long been part of the culture, the majority of users shy away from commercial participation on public networks. At the same time a number of the international niche market operators are successfully attracting immense public attention and avalanches of financial market interest.
Most businesses pioneering sales on the Web however, have been unable to demonstrate profit in their operations and in some cases appear to manifest absolute focus on attaining market presence alone. The Internet indeed offers a perfect backbone for the virtual shop-front but in my opinion, the benefits of online commerce will never be realised by the likes of amazon.com and online CD retailers or their investors.
The full advantage of the network’s retail value and the efficiencies it promises can only be capitalised on by providing ‘soft’ product. The network offers a unique ability to conduct transactions and deliver on request any form of media which can be digitally encoded. This alters the operating criteria, especially for the software and entertainment publishers able to respond to the changing perspective.
However, this won't occur until formats like MP3 audio and MPEG video are accepted by the industry as viable for distribution and suitable commerce standards/regulations are enforced internationally. Only in this context will we see any viable inroads for the dominant media publishers being made on the network.
At present, it is predominantly the providers working outside of traditional market restrictions, specialising in services catering to the esoteric needs of the public, who continue to explore new commerce opportunities and, together with the participating audience, constitute a large part of the overall growth of activity on the networks.
______________________
net rules ok!
In this new global playing field, it is increasingly difficult to evaluate your supplier. Anything other than advertising or network presence may be out of reach. This creates an ideal climate for a budding industry in the service and associated technologies as well as content development and production areas.
A new marketing platform has emerged, setting its own standards. Online technical resources are fast replacing the traditional telephone and on-site product support that was once available from locally based suppliers. Practices like ‘download before you buy’ promise to improve efficiency by eliminating a substantial portion of misguided purchases where incompatible or inappropriate product would have been selected. Whilst providing substantial savings for the publisher, this regrettably also shifts the responsibility for competency into the hands of consumers.
This trend is readily evident in the software industry, which increasingly chooses to provide a ‘pay per minute’ help-desk only for its product. The consumer, by definition, may find this a cold reality, yet all publishers will follow Microsoft in the practice of ‘pay as you use’ support on freely distributed product. I'm not suggesting that anyone will go broke giving away software, possibly the opposite. We’ve known this argument since Henry Ford postulated: "I'll give everyone in the world a Model T if they guarantee they will buy the spare parts from me", and while this was far from practical with Ford vehicles, it presents a new vision for software delivered at the user’s expense.
______________________
lonely rider
The very nature of hyper-media empowers users to play a more active role as customers, both in the selection process and, more importantly, in on-demand participation with access to services at all times. These include net broadcasts, hypermedia, newsgroups, feedback and real-time activities like Chat and IRC.
Online participants define their own extent and progress of interest, making the experience an almost exclusively ‘one-on’ environment. In a group situation, the inherent interactive characteristics of the hyper-media are lost, or in fact, present an immediately apparent hindrance, just like someone switching TV channels in spite of other participants. Any situation other than a demonstration or a training session, where the audience passively engages in a presentation, is bound to encounter these limitations.
Yet, this brings us to the very strength of the hypertext world where the audience and consumers are able to control the selection of source material with an unprecedented degree of opportunity to individually evaluate the product.
______________________
me too
It is debatable whether there is any evidence for an unprecedented rise in creativity and expression throughout the communicating world, yet the data entering the global information systems seems to be taking on explosive growth patterns.
Some of this deluge is merely the countless ‘me too’ responses to the attention the ‘new media’ has generated amongst the established public forums. These will diminish as the ‘lime-light’ of popular media moves on to the next attraction.
By far the majority of content on digital networks today is created and survives in an entirely new paradigm. The contributions keep coming, just because now they can. Like any space, if it is there, it will be filled guaranteed... and if it grows?
Virtual space has opened opportunities for many to publish their own data and explore the traditional boundaries of public access to information. Whilst the real value of the material remains unrealised, it is this very phenomenon which dramatically accelerates the state of play in the IT industry and creates new consumer markets for technology, e.g. the search and indexing systems which are suddenly so prominent.
______________________
garbage in – garbage out
The majority of data is simply accumulated to the maximum available capacity. Once it is committed to storage, data will usually survive for the life of the system and end up on the scrap pile if it is not transferred to the next generation of computers.
However, by and large, information remains hidden behind the indexing regime, if indeed one is observed at all. There will never be enough time to clean up the discs or evaluate the contents, since this process is not as transparent as it is with conventional junk. For instance, old copies of documents don’t get scuffed corners and often display the date of the last system transfer or upload instead of the actual record.
Unlike anything in the material world, data can be produced with relatively negligible requirements of source material or energy. Even less effort or resources are required to make copies of existing data and additional versions. These will forever hog storage and indexing space of our computer systems, which hopefully will just cope with the bottom line expectations of the users.

(C) 1997 Piotr Kulaga aka Charlette Proto.