silence?

Since the beginning of 2012 I've given up on busting a gut to promote this blog and have found some very lively debate forums in LinkedIn groups.
______________________
read
One day I'll update some of the posts I've started here and post summaries of the most interesting forum contributions, but the reality of a few ticks amongst hundreds of viewers simply fails to motivate these days.

An idea generation process in creative design

I don't claim that I come up with a multitude of equally valid and novel ideas every time, but aiming to do so I employ a loosely specified yet prescriptive technique. The methodology is aimed at non-routine and innovative classes of design, where the expected outcomes are weakly defined and instead aim to anticipate the curiosity of users and, in turn, motivate them to set out new goals.
1. Given a brief or a prospective invention, I sketch freehand as many roughs as possible, resisting any tendency to refine elements within. At this stage I think: "WHAT?" and intentionally focus on capturing my responses to the given problem through motor actions alone, consciously suppressing cognitive expressions of common sense.
This approach allows a multitude of STRUCTURES to emerge without forming an attachment to any individual instance of thought. When similarities or repetitions become evident I generally treat them as refinements and terminate the task, allowing myself to divert my attention away from what I'm trying to solve. This process is similar to what an architect does at an early stage - e.g. sketching lines without knowing if a line will represent a form boundary or an ornamental part.
2. Upon return, ideally the next day or after a lunch break, I reflect on the sketches, thinking: "SO WHAT?". Generally three or more candidates stand out for further refinement or cross-fertilisation with elements of the discarded lot. If my intuition supports a strong conviction about the captured forms, e.g. diagrams or layouts, I'll present them for a preliminary "everything is open to change" critique by any stakeholders at hand. The rougher and more ambiguous the better - when faced with loose and obviously unrefined forms, people are least intimidated about changing them or contributing their own alternatives.
3. Thinking, or asking myself: "WHAT NEXT?", I refine the early structures in terms of the BEHAVIOURS and FUNCTIONS the user may invoke when confronted with the product I'm trying to design. In the ideal situation, the functions and behaviours suggested by each sketch elicit the intended attributes of the conceptual product or the brief represented in my designs.
Once again I face the other project stakeholders, regardless of completeness, but this time I also focus on the metrics that would be appropriate if the design(s) were to undergo heuristic evaluation or a preliminary user panel test. For example, drawing on B.J. Fogg's model of simplicity, I may attempt to identify the scarce resources most likely to be relevant in the context of the prospective/targeted users. Simplicity is a function of the scarcest resource for a given person in a particular situation. These resources include: time, money, physical effort, brain cycles, social deviance and non-routine.
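To make the Fogg angle concrete, here is a minimal sketch (TypeScript, purely illustrative) of ranking a persona's scarce resources; the 0-10 cost scale and the commuter persona are my own assumptions, not part of Fogg's model.

```typescript
// Fogg's simplicity heuristic, caricatured: for a given persona in a given
// situation, the binding constraint is whichever resource is scarcest.
// Resource names come from Fogg; the cost scale (0 = abundant, 10 = scarce)
// is an illustrative assumption.

type Resource =
  | "time" | "money" | "physicalEffort"
  | "brainCycles" | "socialDeviance" | "nonRoutine";

type ResourceCosts = Record<Resource, number>;

// Returns the resource a design should economise on first.
function scarcestResource(costs: ResourceCosts): Resource {
  return (Object.entries(costs) as [Resource, number][])
    .reduce((worst, next) => (next[1] > worst[1] ? next : worst))[0];
}

// Example: a hypothetical commuter persona glancing at the product on a phone.
const commuter: ResourceCosts = {
  time: 9, money: 2, physicalEffort: 3,
  brainCycles: 7, socialDeviance: 1, nonRoutine: 4,
};

console.log(scarcestResource(commuter)); // -> "time"
```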
A decision on the flow of the iterative cycle
At this stage, if all goes well, we may have a compelling "bird's-eye view" of one or several competing designs of the product (salient features of the whole system, but lacking any depth), ready for low fidelity prototype tests (rough and still quite ambiguous, for the same reasons as before); otherwise, informed by the failure, I return to step 1.
Provided that a sufficient consensus to move on was reached and testing or heuristic critique confirmed this decision, an interpretative or metric analysis of outcomes will elect the winning design if a number have survived this far. Next, I opportunistically (whatever seems the easiest to solve) pick individual components/elements to develop through the same workflow pattern of WHAT?, SO WHAT? and WHAT NEXT? With a bit of luck, completing the easy parts may give me insights towards solving those which eluded me at the outset.
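For what it's worth, the whole cycle can be caricatured in a few lines of code; the callbacks and the round cap below are stand-ins of my own, not a formal method.

```typescript
// A toy rendering of the WHAT? / SO WHAT? / WHAT NEXT? cycle described above.
// The callbacks stand in for sketching, stakeholder critique and
// refinement/testing; maxRounds just keeps the illustration finite.

interface Candidate { sketchId: string }

function designCycle(
  sketchRoughs: () => Candidate[],             // WHAT? - capture structures
  critique: (cs: Candidate[]) => Candidate[],  // SO WHAT? - stakeholder shortlist
  refineAndTest: (cs: Candidate[]) => boolean, // WHAT NEXT? - behaviours/functions
  maxRounds = 5
): Candidate[] | null {
  for (let round = 0; round < maxRounds; round++) {
    const shortlist = critique(sketchRoughs());
    if (shortlist.length && refineAndTest(shortlist)) return shortlist;
    // informed by the failure, return to step 1
  }
  return null; // no consensus reached - time to rethink the brief
}
```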

web3 - is anybody out there?

______________________
situated social web
...

web3 medium - myAvatar myLife

web3... continued
______________________
what next?
Just how is my Web3 vision going to take root in a space where, as I said before, anything more complex than a light switch is out and so many "game changing" venture claims fail? The answer is simple - we are already halfway there and I'm just gauging an existing trend.
Clear signs exist that a more personalised and bespoke future of the Social Web is near: an aesthetically richer medium capturing style, creativity and the compulsion to express more than in-your-face comments and slapstick pics. Apparently, the fastest growing Social Web network at the moment is SoundCloud (http://soundcloud.com/). It is just like the original MySpace on steroids - encouraging users to record or create music and sound and share it with the world, building communities in the process.
______________________
have a look at this; but I can't be bothered sorry
Since I believe that an underlying desire for richer profiles exists, will SoundCloud be a killer app? I don't think so; the narrow-band interest communities are old hat now - nerds calling out ad nauseam "have a look at this" will never make a social scene. That's been tried to death and represents nothing more than a Flickr or YouTube for sound - not a community but simply a space to dump stuff. Most likely it will be a nice gadget popping up on the wall of every budding DJ in your crowd. Another reason for SoundCloud to fail is the lack of technical prowess. For one, I've never experienced a session without a number of errors popping up, but more importantly the user experience is just so poor, e.g. the sound cuts out as you browse - surely it could play on until you select another track.
Frankly, Facebook's "listen and watch what your friends are doing" is likely to have more grip once the initial flood of outcry from users reluctant to accept new features and changes dies down. The recently introduced timeline, on the other hand, could prove to be a dud, even if it enhances the means for sharing one's life with "friends". It smacks of the tedious "look at my holiday snaps" routine, the kind we yawn through while looking for a way out. The point I'm trying to make is that push is out and pull is in; a voyeuristic peek at your friends is OK, but slide nights faded into obscurity very long ago. There is a fine line somewhere between "jackass" videos or one's intimacy exposed and the vain "look at me" calls.
______________________
my id (the antonym of ego)
The appeal of the secret or hidden side in all of us is constantly growing in an otherwise conformist world. The need to reveal it grows in parallel; even the public eye increasingly becomes a witness to the extremes. The disparity between the self and the mundane act we present every day will collapse in time, but for now an outlet for the more extrovert facets needs to be found.
A few years ago I read a very instructive comment regarding SN identity: "we create a parody of ourselves on Facebook and before we know it we become it". It resonated with me like a revelation; I was experiencing it in Second Life. Perhaps "parody" doesn't quite capture it and should be qualified by pointing at the need to make a joke of it - we simply can't face our fantasies easily. Fetishes or even quirks are not welcomed either, so perhaps the "joke" identity is often the only way to come out and reveal the other self.
This paradigm is far more obvious in Second Life - the concept of an avatar identity is a given and the meaning implied through it is widely accepted and readily understood. First-time noobs seem perplexed, but soon someone tells them it isn't MSN and that they would win more friends as a truck or a lizard than running around as a "boy next door" screaming "will someone cyber with me now?" I just didn't get SL until I made my first female avatar; I had plenty of opportunity to chat with people at the local pub in real life, so I'd seldom log in just to be there (I started when I was studying it). Needless to say, before long my id (alter ego) became an obsession and the boundaries began to blur. In time I realised that I had recreated my identity against all the expectations of conformity in the world around me. However, I'll tell that anecdote in another post.
TBC

web3 – to be or what to be? that is the question

As the years go by I ask myself: isn't it time to follow up on the essay I wrote back in 1996 - "the medium shapes the message"? Perhaps now that practically all of those speculations have been realised, the time has finally come. Incidentally, I re-published it at the start of this blog.
In 2007 I came across a rhetorical question posed by Richard Bartle - "to be or what to be? that is the question" - and since then it has been my sig line and an inspiration for all kinds of fantasies about the internet. It is only fair that I should consider this notion as a theme for contemplations on the future of the web.
Web3 is fast becoming the industry buzzword. The enabling technology is pretty much in place; all we need now is the next generation of killer applications. There are some good technical ideas being talked about a lot, but technicalities rarely inspire more than Sci-Fi plots and that isn't what I intend. Without boring you to death, for the most part I'll attempt to stick to the emotional aspects of what's to come.
______________________
technical future or failure – briefly
Obviously the most significant term we hear today is the "semantic web", and perhaps we are finally ready to realise that rather old dream. Visionary technology always takes time. Ted Nelson proposed a very idealistic conception of hypertext (including the term) back in 1963; he was intuitive enough to call the project Xanadu. It is a very elusive goal - XML, so often equated with the semantic web, and even earlier DKNF also aimed to achieve much the same thing with very little success. For those who haven't met it, DKNF (Domain-Key Normal Form) is a normal form proposed by Ronald Fagin in 1981, which in effect prescribes that keys and domain constraints alone carry the meaning of entities, uniformly across knowledge domains. Similarly to XML, a ubiquitous implementation of DKNF hasn't been achieved yet, even if some examples of "enterprise keys" (a poor cousin of DKNF) and common XML tags adopted on an industry-wide basis do exist.
Just like Artificial Intelligence, the semantic web is a great promise which hits a brick wall upon any attempt to broadly generalise the rules. The problem of representing common-sense knowledge presents barriers which appear to elude the practical approaches tried so far. Some philosophers point to "embodied cognition" as a main reason for this difficulty. The constructs of knowledge and its representation are formed in relation to human experience and our sense of the body. We have an intimate knowledge of the world around us, but capturing it gets ever harder the more generalised and universal the description is intended to be. In essence, as we tend towards the general, typologies and taxonomies replace attributes, and consensus about the significance of discrete entities becomes contentious across domains of expertise.
Perhaps, given a pragmatic approach, solutions of limited scope will be adopted and prove effective in providing some practical value for now. This outcome would be far from ideal, but more realistic considering the utopian nature of the problem. Other solutions could also eventuate in unanticipated ways, e.g. emerge from the chaos of user content tagging... we will see. If anyone is actually interested, I'm happy to help with knowledge engineering issues and elaborate further, but I don't wish to make this argument any more obscure or unapproachable now.
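As a toy illustration of structure emerging from tagging chaos, consider simply counting which tags keep turning up together; the data and the threshold are invented, but notice how "jaguar" refuses to settle into either the car or the cat cluster - the consensus problem in miniature.

```typescript
// A hand-wavy sketch of "structure emerging from the chaos of tagging":
// count tag co-occurrence across items and surface the recurring pairs as
// candidate semantic relations. Data and threshold are invented.

const items: string[][] = [
  ["jaguar", "car", "speed"],
  ["jaguar", "cat", "wildlife"],
  ["car", "speed", "engine"],
  ["cat", "wildlife", "photo"],
];

const pairCounts = new Map<string, number>();
for (const tags of items) {
  for (let i = 0; i < tags.length; i++) {
    for (let j = i + 1; j < tags.length; j++) {
      const key = [tags[i], tags[j]].sort().join("|");
      pairCounts.set(key, (pairCounts.get(key) ?? 0) + 1);
    }
  }
}

// Pairs seen together more than once hint at a reusable relation.
const related = [...pairCounts].filter(([, n]) => n > 1).map(([k]) => k.split("|"));
console.log(related); // -> [["car","speed"], ["cat","wildlife"]]
```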
______________________
an intuitive prelude to the next web
Taxes and death are said to be the only certain future, but so is appetite once a taste for something new is experienced. Web2, or the Social Web, has given us the appetite, so we are ready for more and have some idea of what works and what is simply lame or, at best, just another 'me too' attempt. Web3, in my view, will be the stage when our avatar matures and gets a life beyond the juvenile bitching temperament so dominant online in the last few years.
Is that it? Just a few words: avatar, life and emotions. Yes, and on the contrary, this is a huge ask. The future of the Web is you, me and our lives. Reality Mining is here and the companies behind it aren't likely to lose interest in developing the infrastructure and services based on 'big' data. Web-cams, smart-phones, geolocation, real-time content, etc. may be good, but there is more to come and the next thing is very close, even if it hasn't taken off yet.
A vision of virtual identity is coming into being – avatar, emotional states, sense of presence, shared situations and experiences. These allures will inevitably continue to penetrate our existence in increasingly ubiquitous and pervasive ways enabling a sense of participation and the fuzzy rewards implied by such states. For now, the hardest part is describing a paradigm without suitable analogies or practical metaphors. Let me try without attempting to exhaust the possibilities.
Imagine presenting/communicating a scenario through a medium able to mash up a real world representation with avatar performances derived from any given situation, all under your control. Another way to visualise this might be a high fidelity fusion of Second Life avatars in a Google Earth/Street View context with sound, in a totally arbitrary collage of content from real and virtual space, performances and chronology in real time or from a persistent store of past events.
I think there may be a few laughs in that, just like the incidentally recorded or intentionally set-up videos we watch on YouTube, and it would probably constitute a great source of content to post there as well. Perhaps you have a few ideas too - feel free to comment.
______________________
simplicity rules
The greatest aspect of doubt in the above scenario is the notion that "easy" and "simple" always win on the Web, while more complex capabilities seem to die in the long run. The most common explanation for the success of Facebook over MySpace, previously the dominant SN platform, was its plain interface and limited possibilities in comparison with the open-ended style of its predecessor. This was largely true: in the days when blogs and MySpace pages were a new thing, both required a fair bit of creativity from the user. However, the great majority somewhat lacked in that respect and felt intimidated, even if it may have seemed that everyone was writing a blog or posting pictures and music tribute pages. In reality very few actually wrote anything, and those people were strongly motivated by one thing - a chance to gain readers and ride the tide of wide media attention obsessed with the new phenomenon. Similarly, MySpace gave people a chance to have their own Web presence and an outlet for their aspirations and creativity.
When Facebook emerged, everyone got a chance to participate with very little effort, and since the capabilities were so poor, nobody felt like a poor cousin to the more creative show-off crowd. The masses flooded in and at large it satisfied the need to share snapshots of their lives. Seven years on it is still growing, but we can't be deluded that a "wall" and a few things added as time went by is the ultimate function set. Indeed, Google+ surprised me by providing such a blatant duplicate. Considering the earlier attempts to innovate in the medium (Gmail and Google Wave), I was naive enough to expect more. Facebook's popularity is growing not so much because it is precisely what people want, but because the platform is so ubiquitous now that everyone else needs to join just to post a comment on anything going on around them. Yet another factor is the age group spread - kiddies and the aged are getting in on the act too. What does this say about G+? We will see, but I doubt that many will switch. Perhaps it will have more impact on LinkedIn than on the target in its sights.
The Facebook login/membership/profile has a lot to offer as far as new SN possibilities are concerned, and it is increasingly integrated into new portals, as well as allowing external data to be embedded into its space via the apps API. These days, the prolific population and the extent of participation on Facebook constitute a tempting proposition to plug into that crowd instead of building a competing presence from scratch. Provided that Facebook policy remains committed to this model - and all intelligence suggests it isn't likely to be compromised if the anticipated company float is ever to get off the ground in a big way - there are a lot of possibilities for extending the FB virtual identity and profile in a number of ways.
______________________
virtual face-off
This is the most likely path towards a more complex participation model and additional capabilities being explored in an effective way. In the event that Google succeeded in wiping Facebook off the face of the Earth, the controversy of real-name identity aside, the same if not broader opportunities would most likely exist. The vision of a next generation, richer virtual identity is very likely to encompass all manner of entertainment forms our devices are, and will be, capable of. When SN as we know it now was born, AJAX (Web2) technology was just a geek way to build dynamic content using the HTTP request object in JavaScript, before it had a vernacular name. However, what proved to be the most significant step for the Social Web, and is about to become a lot more prominent, is not the technology but identity. Ever seen anyone log into iGoogle intentionally? Not very likely - it would be such a nerdy thing to do. Google struggles so hard to track and target user profiles, and yet the opposite is true of FB; it is worth a lot more than my estimate - actually, I'd better double it, perhaps double it every Christmas starting with 2011...
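For the record, the object in question was XMLHttpRequest; a minimal browser-side sketch of that "geek way" (the URL and element id are placeholders of mine):

```typescript
// The pre-vernacular AJAX trick: XMLHttpRequest, the browser API that
// quietly powered Web2 before anyone gave it a name.

function fetchFragment(url: string, render: (html: string) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url); // asynchronous by default
  xhr.onload = () => {
    if (xhr.status === 200) render(xhr.responseText);
  };
  xhr.send();
}

// Update part of the page without a full reload - the whole Web2 trick.
fetchFragment("/wall/latest", (html) => {
  document.getElementById("wall")!.innerHTML = html;
});
```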
______________________
situated being and sharing
What sets this new Social Web apart is situatedness (yup, most likely a new word 4u). The integrity of communication in the real world is approximately 40% context, 40% emotions conveyed by body language and tone, and only some 20% words. Current SN communication is devoid of the main cues we are so reliant on, and the effect of this is obvious to anyone reading a "wall" or "stream" for the first time - even more obvious (by contrast) to users of richer SN platforms like Second Life (SL). By no means am I suggesting the Social Web could adopt SL or a similar platform, but the key aspects of situated social events and hangouts, together with a richer sense of identity and individual style, will inevitably penetrate our online and real life space.
My own experience of being Charlette Proto, in an experiment lasting a few years of 24/7 virtual life in SL, is by all means an extreme example. However, it provided me with much more than metaphors and analogies to visualise some of the likely aspects of a future social life and a heightened expectation of SN at large. Above all, it illustrated just how rich a virtually situated life can be, and suggested an explanation of how the naff angst and all the frustrated bitchin' of today's SN is born out of the struggle to squeeze more of our life between the lines.
TBC

brainstorming primed for invention

______________________
conception scenario
In essence, I create in a three-step process. First, I contemplate and research the context of the problem and consider possible ideas; this step rarely results in an actual design. Then, as I gain some distance from the initial task - generally while preoccupied with something else - the answer dawns on me and I record it in a freehand sketch. Similar experiences are reported by the majority of people with extensive experience in design, invention and creating all manner of things. Beginners tend to understate the connection and structure of these three phases and are likely to say that the idea just came to them out of the blue.
The pattern of these three stages - seeding, subconscious reflection and synthesis or discovery - is worth focusing on in an attempt to identify a methodology for creative problem-solving. The task of defining an effective technique, especially in the context of brainstorming scenarios, will readily benefit from harnessing that pattern, in particular by introducing a priming/seeding stage. Ronald Finke (Finke et al 1992) describes this as the "generation of preinventive structures", a key stage of the Geneplore model - a model of the creative cognitive process.
Finke's (Generate/Explore) model prescribes a deliberate, temporary suspension of professional expertise in order to maximise the emergence of unique solutions. Habitual application of expertise implies routine outcomes, limited by the mental blocks existing heuristics impose. Novel and original solutions, and invention in particular, require a fresh and open-minded approach to the problem at hand and to the consideration of the likely possibilities.
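A loose sketch of that two-phase separation with stand-in functions; Finke's model is cognitive, not computational, so treat this purely as an illustration of the order of operations - generate without judgement first, interpret with expertise second.

```typescript
// The Geneplore cycle, caricatured: produce preinventive structures without
// evaluating them, then explore each one against the brief in a separate pass.

interface Structure { description: string }
interface Interpretation { structure: Structure; useFound: string | null }

function geneplore(
  generate: () => Structure,                 // phase 1: no judgement here
  explore: (s: Structure) => Interpretation, // phase 2: expertise comes back in
  rounds: number
): Interpretation[] {
  const preinventive: Structure[] = [];
  for (let i = 0; i < rounds; i++) preinventive.push(generate());
  return preinventive
    .map(explore)
    .filter((it) => it.useFound !== null); // keep only interpretable structures
}
```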
In my opinion, this is an ideal opportunity to exploit the synergy offered by brainstorming sessions, instead of relying on the expectation that people with deep, specialised field expertise will consciously avoid conventional thinking. The procedure would consist of a priming brief presented by experts (e.g. the scope and usability objectives of the proposed project), followed by a period of reflection, which may involve other activities, and finally a core session exploring the multiple contributions in a traditional brainstorming meeting.
______________________
left vs right brain
A core criterion in adopting a suitable methodology is actually a very personal thing. The critical guiding principle is compatibility with individual tendencies in cognitive disposition. At large, people can be classified as left or right hemisphere dominant in their processing of mental tasks. A balance of both is a desirable trait in everyday life, but problem-solving situations benefit from a fusion of the two extremes. Collaboration between people representing the opposite sides of the spectrum generally approaches optimal results by combining the strengths of their respective predispositions.
A number of cognitive dimensions can be identified as characteristic of left and right brain dominance. These include: decomposition vs holistic (reductionist vs systems view), sequential vs random, symbolic vs concrete (diagrams vs prototypes), logical vs intuitive, verbal vs visual and reality vs fantasy. On one hand, left hemisphere dominant people are more productive at decomposing problems into components and meticulously constructing the desired outcome. On the other, right hemisphere dominant people work in a less structured manner, tending to veer off on tangents and discover unexpected results. Creative problem-solving situations call for a combination of both groups in order to deliver innovative and disruptive outcomes.
______________________
our objectives
The dimension of innovation can be distilled as a continuum from routine craft to creative invention. The routine path is grounded in specialised insight and leads to predictable results with great certainty, but little or no innovation. As the process tends towards the creative path, with an increasing potential for the discovery of novel and disruptive products, the confidence in reaching valid outcomes inevitably diminishes. For every innovation in the marketplace there are thousands of failed concepts we may never hear about, even if some of them possess ample characteristics of killer applications. Similarly, countless mundane "me too" products will never reach the proverbial 15 minutes of fame for anyone to take notice, regardless of the stated objectives - like aiming to change the landscape of the industry, or other claims expressed at launch time.
The salient question is: how to increase the likelihood of desirable results on the unpredictable path of invention?
The big-picture answer, of course, isn't simply a design process issue; many other factors contribute to success and countless examples readily illustrate this. What would the iPad's chances of success be without Apple and the religious consumer loyalty born with the iPod and iPhone range? Even brand equity aside - consider Google without the initial support at Stanford, or Facebook launched at some two-bit school instead of Harvard. All three examples are very cryptic, so I'll forgo any attempt at deep analysis beyond a few thoughts to ponder.
Apple is a company specialising in easy to learn and a pleasure to use products, without any user research. Google's success has more to do with the demise of AltaVista under consecutive takeovers of its parent company Digital - first by Compaq and in turn HP - than with its PageRank algorithm, its speed or, for that matter, the proposed business and monetisation model; there wasn't one. Similarly, Facebook only began to look like money when MySpace was strapped onto News Corp. for a hefty 580 million dollars and slapped with a very ambiguous expectation of delivering the online market to Murdoch - a disaster considering widely reported stats like 4-in-10,000 click-through rates on SN ads.
The company and its culture are yet another aspect of the bumpy road to product success. We need people with the conviction and passion Steve Jobs had selling jelly-bean iMacs despite the dated bubble design in the days of sexy, slim LCD screens. When Jobs returned to Apple, then on the brink of bankruptcy, he found the frustrated Jony Ive, who showed him the iMac designs shelved by previous management, and so the company was salvaged - to top the profit stakes now. Can such a transformation be replicated? Certainly it can, but tweaking the knobs while making exaggerated claims is not likely to deliver the results.
The answer is: talent, audacity, open-mindedness and of course a suitable process.
______________________
take the world by storm
Give or take some prescriptions and the earlier priming session, a brainstorming event can closely follow the well-known process and guidelines. Anything goes as far as idea capture and documentation are concerned. Those comfortable with verbal or bullet point descriptions should be mindful that input from other group members needs to be integrated on an ad-hoc basis, even if the resulting lists aren't quite as orderly or neat as intended. Mind maps should be considered for their inherent flexibility of structure. However, freehand visual representation provides the most fluid and open-ended means to capture the rich picture of the problem and solution domains, and my "doodling and sketching" article may be worth reviewing.
The idea generation process should diverge as widely as possible at the early stages. In the event that an emerging concept reaches wide acceptance, or various ideas appear to converge on a common theme, the process shifts into deeper (more specific) exploration of its entities and personas or actors, including all implied aspects of the context situations. Each stage of the divergent "what if?" exploration should reach an optimal phase where different "stories" present common attributes (converge), and this may then be documented in greater detail.
______________________
low fidelity prototypes
Lo-Fi prototypes provide a very rich form for the exploration and capture of ideas. Paper, pens and scissors, post-it notes, Lego or even screen-based contraptions; whatever the medium, prototype props will aid and empower the brainstorming process. By making the simulation of Use Cases more tangible, especially for the "right brain" participants, prototypes extend the dialogue and motivate both different and deeper persona descriptions. I strongly encourage role-play and the use of props (Lo-Fi prototypes), with particular attention paid to the scarcest resources (see below) for each persona and situation, with a view to minimising effort where feasible.
______________________
guiding principles
The core guidelines are: simplicity, and reflection in action. BJ Fogg proposed a very convincing model of "simplicity" as a function of the scarcest resource in a given situation. The scarce resources include: time, money, physical effort, brain cycles, social deviance (unfamiliarity) and non-routine actions. Ideally a use case would present just one scarce resource, but more often a combination of a few will need to be balanced out.
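A minimal sketch of how such a balance might be scored; the weights and the 0-10 cost scale are my own illustrative assumptions, not part of Fogg's model.

```typescript
// Comparing candidate designs when a use case taxes several scarce resources
// at once: a rough weighted sum, where weights express how scarce each
// resource is for the persona. Lower total = simpler for that persona.

type Resource = "time" | "money" | "physicalEffort"
              | "brainCycles" | "socialDeviance" | "nonRoutine";

function simplicityCost(
  costs: Partial<Record<Resource, number>>,
  weights: Partial<Record<Resource, number>>
): number {
  return (Object.keys(costs) as Resource[])
    .reduce((sum, r) => sum + (costs[r] ?? 0) * (weights[r] ?? 1), 0);
}

// Two hypothetical sign-up flows for a time-poor persona: lower score wins.
const weights = { time: 3, brainCycles: 2 };
console.log(simplicityCost({ time: 6, brainCycles: 2 }, weights)); // 22
console.log(simplicityCost({ time: 2, brainCycles: 5 }, weights)); // 16
```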
My own interpretation of Gary Rolfe's "Reflection in Action" (or Reflective Practice) model comprises three questions posed by practitioners at each stage of the concept exploration or production cycle: What? So what? and What next? In this form of analysis a description of the situation is obtained, which leads to scrutiny of the outcomes and the construction of knowledge learnt in the process. The net effect of this approach constantly improves the product itself and promotes the individual mastery of the practitioner.
An aside: in the context of workplace Performance Reviews and self-improvement, the principle of Reflection in Action provides an opportunity to elaborate on the integrity of one's responses and the influence of the impressions arrived at in the process of asking the three questions.
______________________
judgement
Provided that ideas are elicited in the brainstorming scenario, electing the most promising candidates for further development, and obtaining a consensus in order to reach a shared vision and a sense of commitment, is likely to present a formidable problem in itself. Competent design professionals should be able to address this dilemma by recognising and championing the merits of outstanding concepts; however, the validity of this methodology and of the whole process is clearly in someone's hands, or rather at the peril of their emotional intelligence.
______________________
handover
While not a terminal stage in absolute terms, this is where the design and engineering members of the team take over the project, introducing the regular delivery process, e.g. Agile, Kanban, Lean etc. Further critiques and reviews of the product under development lend themselves to additional iterations, in settings analogous to the described brainstorming sessions, and benefit from a "check-in" of results against the originally stated goals and objectives.
______________________
why not?
If company culture and value structures are greatly at odds with the notion of unpredictability, and with trust placed in methods with a relatively low level of confidence, perhaps the more pragmatic approach of routine processes should be adhered to, forgoing the potential for invention and the goal of delivering disruptive products.
______________________
REFERENCES
Ronald A. Finke, Thomas B. Ward and Steven M. Smith, 1992, “Creative Cognition: Theory, Research and Applications”.
Another worthy read on the subject: "The Brainstorming Process Is B.S. But Can We Rework It?".

creativity in brainstorming scenarios

Fundamentally, the situation is that of a small team engaged in the process of problem solving in a commercial setting.
Brainstorming is an approach to collaborative idea generation. Its proponents emphasise the benefits of high levels of novelty, quantity, diversity, unusualness and imaginativeness in the identified solutions. (Finke and Bettle 1996)
The term "brainstorming" and its methodology were introduced by Alex Osborn in 1953. It is defined as a set of objectives which a small team (ideally seven people or fewer) should observe as they take turns in a loosely organised meeting, casual in nature. These are: defer judgement; encourage wild ideas; build on the ideas of others; go for quantity; one conversation at a time; be visual. The last two guidelines have over time been appended to Osborn's concept, and countless other variations have been proposed since, including an anonymised computer-based version, but these are beyond the intended scope of this post.
If you are not convinced, you aren't alone. Parodies aside, I'll attempt to outline some of the criticisms of brainstorming, with a notion of "harnessing" the intended benefits of crowd-based creative power through a conscious realisation of its likely pitfalls.
______________________
no such thing as bad ideas?
The main objective of brainstorming is the generation of multiple ideas in an unstructured manner, looking at problems from different perspectives and allowing possible solutions to emerge in a natural and unconstrained way. However, it is typically employed to solve particular problems rather than as a means of exploring new possibilities. Additionally, as a result of conformance it tends to produce "groupthink", which all too often leads to the adoption of low value, impractical or unrealistic solutions. (Finke and Bettle 1996)
The assumed openness to elicit untested ideas and explore a wide range of possibilities is implied by the prescribed freedom from critical comments and evaluation. In my opinion, this can be a weakness in the process. In brainstorm speak this is called a "no bad ideas" environment. The rationale behind this prescription is that there is no greater killer of invention than comments like "that won't scale" from one's peers. (Bernard and Summers 2010) Perhaps remarks like "that won't scale" should instead be acknowledged by all parties as a contextual fact in the consideration of the proposed idea. Terminating the exploration due to a single constraint would be premature, but I hate the thought that the vocal person mentioned above might later be assigned to deliver the execution of the flawed concept against her or his judgement, simply because the team committed to developing it further without hearing their expert opinion.
This guideline (defer judgement) is the one most commonly broken in real life, and I may be encouraging that further, but being exposed to - or rather supported by - the breadth of a team's collective expertise is a well recognised strength of collaboration.
______________________
everyone can be creative
The assumption is that, given the right environment - an incubator with all the right conditions to stimulate ideation and make it fun (props, toys, drinks and snacks), a clearly stated free-from-constraints attitude ("no bad ideas") and a means to capture the ideas (whiteboard, pens and paper, cameras, voice recording etc.) - when every member of the group shares their inspirations, given an equal chance, an invention is likely to emerge spontaneously. This idealistic view is widely accepted, but is it a realistic and reliable path to creative work?
Going by my experience, I can only say: sometimes, and not very often. As an exclusive approach it simply represents a try-hard design utopia, if not a means to undermine the importance of roles traditionally defined as creative. After all, not everyone fixes their own car or builds their dream kitchen hands-on. Skills are a combination of both the learnt technique and an inherent intuition for the subject of the work we pursue; hence the expectations placed on brainstorming may actually be somewhat naive.
First, a number of personality issues quite at odds with brainstorming need to be considered. Pulling raw ideas out of one's head and expressing them for the benefit of the team might encounter some barriers; after all, not everyone is an extrovert. Then we need to think outside the box, when most people feel more comfortable with low risk, regular routines. And finally, the sudden impulse or flash of creative insight is recognised as strongly associated with intrinsic motivation, which by definition is the opposite of the more common extrinsic motivation based on set goals, scores/gains and the recognition of value by others.
Further, we have the issues of productivity and tangible value outcomes, which need to be addressed. If the un-primed ideas actually flow, will other members of the team really be challenged to take ownership of the implied potential of this creativity, or simply idle awaiting their turn, or worse - freeze with apprehension of their own defeat? Rather than perceiving it as an opportunity to express their own ideas and views, many people feel quite confronted by the notion of addressing their peers. The 'productivity loss' relative to working individually is well documented by various studies: evaluation apprehension despite the prescribed rule, social loafing as a result of shared responsibility, and the limitations of short-term memory resulting in forgetting one's inspirations while others express their own - to name just a few. (Finke et al 1992) The evidence of cop-out contributions by employees expected to come up with individual pursuits towards the common good is abundant, and YouTube is a great beneficiary of those.
There is more; just look at the various cultural values ingrained through education and life experience at large. For example, Confucian philosophy, with its deep influence on Chinese, Korean, Japanese, Taiwanese and Vietnamese cultures, fosters hierarchies of mastery and a great regard for traditions, the past and its wisdom. This has a profound impact on an individual's concept of professional practice, knowledge and the masters entrusted with its responsibilities. In many ways these principles are at odds with the notion of informally guessing or stumbling upon solutions, which brainstorming embraces at its core. Of course this doesn't stop Asian people from being good brain-stormers, but it certainly undermines their sense of respect for and trust in the adopted results.
______________________
can we fix it and should we try to?
Apart from providing an aspect of fun at work, brainstorming gives us a personal voice. At a time when social culture at large strays further and further from stereotypes, the work environment suffers from a 'job hat' syndrome, and the hat appears to be worn ever tighter. A personal voice is therefore an increasingly important asset; a forum with an attentive audience is valuable in terms of feeling involved, empowered and responsible for reaching high level objectives.
Above all, brainstorming offers a means to suspend the routine approach to tasks and one's reliance on expert insight and instead, act in a freer or fresher manner. This indeed, is where we may discover the genius inventor within us.
Problem solving based on specialised expert insight provides a high level of confidence in both the productivity and the predictability of results. In other words, if a problem is trivial and the desired outcomes are well known in advance, there is no need to embark on discovering anything new, and a craft-like approach will be the best fit for the job. However, an expectation of disruptive outcomes or invention calls for maximum spontaneity in the creative process - the proverbial "thinking outside the box" - in order to reach optimal results. I feel that non-routine thinking presents a significant enough value in design to dedicate my next post, "brainstorming primed for invention", to an attempt at making brainstorming really "work" in creative pursuits.
______________________
Ronald A. Finke and Jonathan Bettle, 1996, “Chaotic Cognition: Principles and Applications”.
Alex F. Osborn, 1953, “Applied Imagination”.
Chris Bernard and Sara Summers, 2010, “Dynamic Prototyping with Sketchflow in Expression Blend”.
Ronald A. Finke, Thomas B. Ward and Steven M. Smith, 1992, “Creative Cognition: Theory, Research and Applications”.