Posted: April 12th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
Most VC readers must already be familiar with Data Flow 2, the most recent volume in the growing Data Flow family, published in February 2010. Featuring interviews with New York Times Graphics Editor Steve Duenes and Art+Com Director Joachim Sauter, as well as one with Andrew Vande Moere and myself, the book is an inspirational compendium of hundreds of projects. It presents itself as a portfolio book, featuring an array of innovative approaches (many featured in VC) that are incredibly provocative and inspiring. Due to its coffee-table nature, the title doesn’t aim at an in-depth analysis or theoretical reflection on the displayed projects and defined categories, but acts primarily as a stimulating showcase of ideas.
As Andrew Vande Moere eloquently states in his review, the foreword doesn’t quite align with the book’s content, since most of its claims of insightfulness are not necessarily substantiated by the variety of executions showcased throughout the book. Nonetheless, Data Flow 2 is a great source of inspiration for anyone working in the domain of data visualization.
Posted: April 7th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
A very appealing spline-based 3D form in Processing that represents the bass frequency and puts it into motion. As Christian Bannister explains:
What would the bass look like? What would it be like to touch it and manipulate it directly and visually in real-time? These are some of the things I am trying to get at in this sketch.
Posted: April 7th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
Unfortunately the Call for Participants is now closed, but this initiative should nonetheless be interesting to follow. Synthetic Aesthetics aims to bring creative practitioners, and those expert at studying, analyzing and designing the synthetic/natural interface, together with the existing synthetic biology community to help with the work of designing, understanding and building the living world.
From this thought-provoking premise:
Biology has become a new material for engineering. From the design of biological circuits made from DNA to the design of entire systems, synthetic biology is very much interested in making biology something that can be designed.
The project asks:
Can collaborations between synthetic biology and design inform and shape the developing field of synthetic biology—the engineering of new and existing biological entities for useful means? What insights can design offer in designing microscopic entities for a human-scale world? Can design learn from synthetic biology?
Posted: February 10th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | 2 Comments »
(Originally written in May, 2009 - Extinct section in VC)
Recursive cycles of innovation happen within many different areas, most notably in the domains of Art and Design. Even those who are completely out of touch with fashion can observe how the field is constantly rediscovering its past, recycling ideas, and incessantly mixing new trends with old influences. In Economics, it’s a well-known fact that business trends and stock markets expose processes that tend to repeat themselves in a more or less regular fashion. In this domain, the best-known study was developed by Nikolai Kondratiev, a Russian economist who observed a series of long 50-year cycles in the modern world economy. The Kondratiev waves, as they were later called, consist of alternating periods of high and low sectoral growth, which in most cases have proved accurate since the end of the 18th century. In fact, from whichever angle you look at history, there’s always someone ready to point out a specific recursive cyclical pattern.
But even though we are accustomed to this type of process in many fields, we always think technology, and in particular the computing industry, is immune to it. After all, our technological progress is made of several vertiginously rising paths that share the absence of a rear-view mirror. There’s no point in looking back, or even considering that some aspects of the past might recur, simply because there’s nothing to learn from it. Nonetheless, the act of uncovering patterns and potential cycles, particularly in an industry that prides itself on its continuous fresh innovation, is an extremely appealing exercise.
The pattern I’m about to describe is divided into three periods, starting at the foundation of computing history and ending with a set of strong indicators of a third new cycle. It tries to make the case that even though individual technological components evolve at a remarkably fast and unique pace, the way in which they interrelate and behave might follow some level of cyclical occurrence. Separating these three stages are roughly two periods of 25 years. The first cycle started in the late 1950s, with the spread of the mainframe-computing model, followed by the second stage in the beginning of the 1980s, with a succession of events that led to the emergence of the highly powerful laptop computer. Finally, the latest and most recent cycle has just started. Led by Cloud Computing and the Netbook phenomenon, everything seems to indicate this will be a major movement for many years to come. From an initial centralized model, to a dispersion of increasingly independent machines, the new drift foresees the drainage of storage and computing from many portable computers and the return to a model based on data centrality. The main distinction this time is that instead of the mainframe, the “Cloud” emerges as the central interconnected hub. Although recurring cycles might be a noticeable pattern in how data is stored and accessed, there’s still a unique common thread to all these stages: a continuous straight progression towards mobility.
First Cycle | The Central Mainframe
Characterized by one central computer, responsible for most of the storage and processing power, linked to a series of satellite terminals, the mainframe-computing model has been a key protagonist in the history of the modern computer since the late 1950s. Back then, people accessed and interacted with immensely large mainframes through a variety of linked terminals that underwent significant changes over time. From early punchcards and teleprinters, to later video displays with their familiar green and amber screens, interactive computer terminals through the 1960s and 70s had one thing in common: their lack of intelligence and their dependency on the crucial mainframe.
Second Cycle | The rise of the laptop and its Portability Effect
By the end of the 1970s, specialized terminals, as the precursors of modern-day portable computers, were becoming smarter. Initially packed with terminal emulation software, these machines were detaching themselves from the almighty mainframe and becoming self-sufficient entities with their own processing capability. This process opened the path for the desktop computer, with early pioneers like the Apple II and the IBM 5150 leading the way. The course of computing mobility had just started, and it would be just a matter of time before laptop computers materialized and eventually replaced desktop computers.
For the most part, the computing industry in the past 25 years has seen laptops dramatically increase their computing power and rival traditional desktop PCs. In 1986, battery-powered portable computers had about 2% of market share worldwide. Today there are more laptops than desktops in business and general use, and in 2008, more laptops than desktops were sold in the US. Even though some mainframes have evolved into the supercomputers of the modern age, uncovering important aspects of science, like the structure of the cosmos or the vast neuronal network of the human brain, the true hero of this story is the laptop. These tiny compact boxes have become potent full-fledged machines with the added benefit of portability – an essential attribute in an increasingly mobile world. But how long will the mobile processing power rush last? Or has it in fact reached a tipping point? Will the hero of the last decade be partially or entirely replaced by its new, weaker adversary: the Netbook?
Third Cycle | Cloud Computing: The Personal Mainframe
Cloud Computing is seen as the next computing trend and the key driver of the third cycle of data centrality. It can simply be described as a “style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet”. This model is extremely in tune with our contemporary lifestyle. We currently access the web through a variety of devices with different features, shapes and sizes. And while the number of access points keeps increasing, the ability to sync content between them is still an immense headache, and we rarely (if ever) see a satisfactory user experience. The Cloud paradigm substantially alleviates this problem by increasingly relying on services, information and applications stored on online servers – the vast Cloud landscape – that can then be accessed at any time from anywhere, as long as there is an online connection.
In an enlightening special report entitled “Let it rise”, The Economist asserts that the Cloud is already a common phenomenon, with 69% of Americans connected to the web using some type of “cloud service”, including web-based e-mail or online data storage. Many companies are following this feverish movement, and in the same report Irving Wladawsky-Berger compares it to the Cambrian explosion some 500m years ago, when the rate of evolution sped up, in part because the cell had been perfected and standardised, allowing evolution to build more complex organisms.
Another indicator of this turning point is the Netbook. In part driven by a global economic downturn, the Netbook phenomenon might prove to be a long-lasting trend. Characterized as a lightweight, economical and energy-efficient laptop, especially suited for wireless communication and Internet access, this new mobile computer has been all over the news lately. In a recent article, Newsweek magazine uncovered a growing market trend in Japan, where more consumers are opting for netbook computers. While PC sales in Japan went down 4 percent in the fourth quarter of 2008, sales of netbooks shot up 43 percent. The recession has been an important driver of this consumer shift, since people have become more sensitive to price, but the growth of cloud computing is its vital ingredient. There’s also an undeniable rational deduction behind this behavioral change. Many users are starting to question whether they actually need all that bustling speed and storage, when their computers are mostly used for emailing and web browsing.
In the diagram shown above we can observe a series of laptops and netbooks linked to a central Cloud, which is in turn surrounded by a multiplicity of abstract devices. Many of these future devices will not require vast processing capability, since they will work as rendering windows for the same online services, flowing incessantly through all of them. The role of “windows for services” might be what awaits many future mobile computers, including mobile phones. This points to the growing value and significance of online services and applications as the vital glue across many systems and platforms.
Predictions always feel like empty promises, and there can be no certainty about what the future holds. Is the Netbook the predecessor of a future class of dumb terminals entirely dependent on the Cloud? Is Cloud Computing really going to be the next big thing? If so, how long will it last? Will it prove to be a long-lasting shift, or will people grow increasingly wary of their privacy and lack of ownership and return to a model similar to the one we have today, in the process instigating a fourth cycle of data centrality?
Posted: February 8th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
At the end of January I was in Paris for a couple of events: a talk at ESAG - École Supérieure de Design, d’Art Graphique et d’Architecture Intérieure and a one-week workshop at ENSAD - École Nationale Supérieure des Arts Décoratifs. The lecture at ESAG was great, and the lengthy discussion that followed, with a packed audience of inquisitive students, was extremely engaging.
The workshop at ENSAD was a longer and very fruitful engagement, part of IDN: Identité numérique mobile (DMI: Digital Mobile Identities), a new program of ENSAD Lab - a research unit for creation and innovation gathering graduate research students and professionals to collaborate on and discuss the contemporary challenges of design. Led by Remy Bourganel and Etienne Mineu, IDN is meant to investigate the flows, emerging patterns and representations that qualify a new digital mobile identity. In this context, students at the workshop explored different ways of analyzing and visualizing the social dimensions most relevant to them. Some of their initial studies can be seen here.
Posted: December 8th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 6 Comments »
*UPDATE* - The UK Met Office data was made available today: http://bit.ly/7mWJbx (scroll to the bottom) or direct link: All.zip (3.7MB). Here’s the corresponding station codes per country.
All our eyes are now set on Copenhagen, for what is in my view one of the most important meetings ever held. Following the overhyped data fraud scandal, which many skeptics have branded “Climategate”, the UK Met Office decided to make available the data for more than 1,000 weather stations from across the world, in order to hush divergent voices. The dataset, to be released this week, is a subset of stations evenly distributed across the globe and provides a “fair representation of changes in mean temperature on a global scale over land”, said the Met Office in a statement. “We are confident this subset will show that global average land temperatures have risen over the last 150 years.”
The data has not yet been made public, but once it is I will update this post. In case you cannot wait for this dataset, the group of scientists at RealClimate.org has recently put together a cohesive list of data sources, from numerous satellites and stations, on sea levels, sea temperature, surface temperature, aerosols, greenhouse gases, and much more. In a blog post announcing the list, the group states:
Much of the discussion in recent days has been motivated by the idea that climate science is somehow unfairly restricting access to raw data upon which scientific conclusions are based. This is a powerful meme and one that has clear resonance far beyond the people who are actually interested in analysing data themselves. However, many of the people raising this issue are not aware of what and how much data is actually available.
This represents a great moment for all of us involved in Visualization at large to be part of the solution and deliver a clear, unequivocal view of what’s happening to our planet. Regardless of how you label your practice – Information Visualization, Data Visualization, Information Design, Visual Analytics, or Information Graphics – this is ultimately a call for everyone dealing with the communication of information for human reasoning. Let’s roll up our sleeves!
Posted: December 8th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
As some of you might have noticed, I’ve been away for a while, so I just wanted to give you a short update on my whereabouts. I got married at the end of October and had a fantastic honeymoon in Asia. After returning in mid-November, I was in Lisbon PT for a talk at a conference organized by the Society for News Design, and later in Sheffield UK for a talk at the School of Architecture (SUAS). More recently I was with Santiago Ortiz, Aaron Koblin, Ben Cerveney, Jose Luis de Vicente, and an amazing group of people at Visualizar’09, in MediaLab Prado, Madrid. The workshop went great and I had a really good time. Apart from all of the above, I’ve been busy with this (an update to follow shortly).
Posted: September 24th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 3 Comments »
On a recent review of the VC database I was simply astounded by the number of dead links among the indexed projects. Worst of all, some had become completely untraceable, possibly gone forever. This was an exasperating moment. VisualComplexity.com, regardless of how insignificant it might seem in the big scheme of things, is still a compact archive of an epoch, showcasing tendencies, methods, discoveries, and fragmented insights into the modus operandi of our contemporary society. For many people searching for those lost projects, VC is not a curated starting point, but a frustrating dead end, leaving them with a slightly bitter taste in their mouths. Sure, some authors could be more organized and concerned with the documentation of their projects, but that still wouldn’t solve the issue. The main drawback we are dealing with is inherent to the medium.
At the present time, we have access to countless cuneiform documents, including economic records, letters, and literary works from early Sumerian times, produced over 4,000 years ago. Many of these artifacts are essential to our understanding of the values and practices that shaped this ancient culture. Can we aspire to the same longevity for our modern cultural artifacts? Most certainly not. We would be lucky if a tiny percentage of our documents lasted even a fraction of that time scale. We are so infatuated with our digital virtuosity that we are blind to its ephemeral nature. It’s curious how at this stage in civilization, when we are collecting data like never before, in quantities that would astonish any nineteenth-century researcher, we are storing it in one of the most fragile and volatile of mediums, if and when we store it at all.
Yes, initiatives such as the Internet Archive are critical, but still remarkably far from any realistic aspiration. In a captivating article in The Wall Street Journal, journalist Robert Hotz explains how “Scientists who collaborate via email, Google, YouTube, Flickr and Facebook are leaving fewer paper trails, while the information technologies that do document their accomplishments can be incomprehensible to other researchers and historians trying to read them.” As we communicate through more and more channels, our trail becomes thinner and thinner. And as time passes, our chances of recovering precious records become ever more diminished.
Hotz provides an illustrative case of this critical challenge. When the leading evolutionary biologist William Donald Hamilton died in 2000, the British Library received a pile of his research papers, together with letters, drafts and lab notes. Among these documents were 26 cartons containing “vintage floppy computer disks, reels of 9-track magnetic tape, stacks of 80-column punch cards, optical storage cards and punched paper tapes”, some dating back to the 1960s. In order to extract much of the crucial stored information, “that could illuminate an influential life of science”, researchers at the Library had to arduously assemble a “collection of vintage computers, old tape drives and forensic data-recovery devices in a locked library sub-basement.”
I found this account extremely alarming and unsettling, particularly since it addresses a mere 40-year gap. Forty years! Now imagine the difficult task of historians 400 years from now. We can do more, and we have to. Otherwise, we run the risk of becoming a memoryless generation, or even worse, of ushering in a digital dark age.
Posted: September 23rd, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 2 Comments »
It’s now official: the VC Book is moving forward at full steam! It’s too early to ask about the title, structure, or price, but you can subscribe to the blog’s feed or my tweets and I will keep you posted on any updates.
Posted: September 22nd, 2009 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
Here are two interesting lists on open visualization libraries and software:
*Note to self: need to update my old list of graph visualization tools.
Posted: September 6th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
Tom Beddard, who has previously been featured in VC, recently posted a series of striking images generated with his Fractal Explorer and based on the outstanding work of the nineteenth-century German biologist Ernst Haeckel. Beddard’s Fractal Explorer is a plugin for Adobe Photoshop and After Effects that allows the creation of fractals based on any chosen image. As Beddard explains:
The Fractal Explorer plugin is a couple of Pixel Bender filters that will generate Mandelbrot and Julia set fractals to any power in real-time. The first filter is for standard fractal colouring whereas the second is optimised to use a technique called orbit trapping to map an image into fractal space.
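For readers curious about the underlying technique, here is a minimal escape-time sketch in Python (not Beddard’s actual Pixel Bender code) showing the Mandelbrot iteration with a simple orbit trap. The trap point, grid dimensions, and the idea of colouring by minimum trap distance are illustrative assumptions; in the plugin, that distance would index into a source image rather than a plain number.

```python
# Minimal escape-time rendering of the Mandelbrot set with a simple
# point orbit trap: besides iterating z -> z^2 + c, we track the minimum
# distance each orbit comes to a trap point. Mapping that distance into
# an image's pixel space is the essence of orbit trapping.

def mandelbrot_orbit_trap(width=40, height=20, max_iter=30,
                          trap=complex(0.0, 0.0)):
    """Return a height-by-width grid of minimum orbit distances to `trap`."""
    grid = []
    for row in range(height):
        line = []
        for col in range(width):
            # Map pixel coordinates to a region of the complex plane.
            c = complex(-2.0 + 3.0 * col / width,
                        -1.2 + 2.4 * row / height)
            z = 0j
            min_dist = float("inf")
            for _ in range(max_iter):
                z = z * z + c                        # the Mandelbrot iteration
                min_dist = min(min_dist, abs(z - trap))
                if abs(z) > 2.0:                     # orbit escaped the set
                    break
            line.append(min_dist)
        grid.append(line)
    return grid

grid = mandelbrot_orbit_trap()
# Points deep inside the set orbit near the origin, so their trap distance
# stays small; escaping points wander far before bailing out.
```

A real renderer would use the trap distance as a texture coordinate into the chosen image, and a Julia-set variant would fix c and vary the starting z instead.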
You can see many of the examples generated by Beddard on his flickr set.
Posted: September 3rd, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 5 Comments »
* These are a series of observations on the original Information Visualization Manifesto. I will occasionally reiterate my points, from the manifesto and subsequent comments, for the sake of argument *
It has been great reading all the different reactions to the manifesto and witnessing the open debate within our community. I feel it’s extremely healthy and invigorating. As said before, if nothing else, I hope this will result in a positive introspection of our discipline.
The manifesto has been circulating a lot over the past few days, but I was particularly pleased to read the supportive posts from Robert Kosara (eagereyes.org), and Patricia McDonald (BBH Labs).
The manifesto introduced two different propositions: (1) a list of ten guidelines or principles, which has for the most part been accepted by everyone; (2) a more controversial proposal for a stronger divide between Information Visualization and Information Art. I will therefore address these two parts separately.
The list of considerations has been praised and well received by most people, but the following points have raised some concerns:
Form Follows Function
Some questioned this old maxim and argued that in the case of Information Visualization it should be rephrased as: Form follows Data. I do oppose this interpretation. As explained in the manifesto: “Form doesn’t follow data. Data is incongruent by nature. Form follows a purpose, and in the case of Information Visualization, Form follows Revelation.” I provided the wooden chair analogy, but in a reply to one of the comments I offered a second metaphor. I think of data as the unwrapped contents of a recently purchased piece of IKEA furniture. Looking at all the components scattered across the floor, you cannot help but feel slightly puzzled about what to do next. You can either follow the instructions or embrace your creativity and generate an alternative object. But in both cases what will determine the form/shape/layout of the final piece is the intent. Many derivations can result from this view, Form follows Intent, Form follows Purpose, etc. I simply decided to go with the most worn-out, yet explicit statement - Form follows Function.
Interactivity is Key
This principle merits reflection from us all. Jerome Cukier and David McCandless challenged the need for interactivity in Information Visualization. Under a broader definition of Visualization I would certainly agree with this notion: information can be successfully conveyed in either static or interactive mediums. However, we have to question what really sets us apart from parallel fields such as Information Design or Information Graphics. I do believe one of the crucial benefits of Information Visualization is interactivity – which also explains why this area emerged from Computer Science and HCI. It’s this “computer-supported, interactive” visual representation of data that truly makes us different. And this unique offering “becomes imperative as the degree of complexity of the portrayed system increases”. The representation of complex networks is just one instance where interactivity should be mandatory.
The Power of Narrative
This point in particular should have been read as a consideration, rather than a strict guideline. Nevertheless, “the question of narrative seems to lie at the heart of this Manifesto; the need to pose a specific question of the data and to weave coherent themes and stories from it,” explains Patricia McDonald. Kim Rees and Moritz Stefaner aptly disputed this prerequisite for every execution, particularly the type of self-made narrative that emerges from exploratory executions. And since we’re talking about analytical tools, this will be a recurrent occurrence. I like to compare this practice to a game designer who lays out an intended context, rules and narrative for the game, but then has a moment of delight when users engender their own narrative, their own path. This is intrinsic to the conception of Information Visualization as a discovery tool.
Look for Relevancy, Aspire for Knowledge, Avoid gratuitous visualizations
As Moritz Stefaner pointed out, these three principles could easily have been merged into one, since there’s a strong overlap between them. However, I do feel they’re individually significant and assertive enough to merit their own independent call.
Don’t Glorify Aesthetics
This principle has been much debated, and since it relates closely to the second part, “The Divide”, I will address it in that context.
The proposed divide between Information Visualization and Information Art was by far the most contentious issue in the manifesto. It quickly derailed into a debate on Aesthetics versus Function and Art versus Science, and we all know how slippery these domains can be. Aesthetics is not the easiest term to define, and neither is Art. I do, however, look at aesthetics from a functional point of view, and to that extent I also do not appreciate the occasional discrediting of aesthetics by the scientific community. If we decompose some of its tangible elements – color, shape, composition, symmetry – we can immediately perceive how aesthetics is an integral element in the usability and legibility of any execution in the realm of Information Visualization. But it’s not the only one.
One of the greatest qualities of Information Visualization, and certainly the main reason why I became interested in the field, is its diversity. It’s able to bring in people from all sorts of disciplines and backgrounds in a remarkably cohesive manner. I look at our practice as a dense voronoi treemap (I could not avoid using this metaphor), where many branches of knowledge come together for the common goal of revelation. This setup works well when all elements of the equation operate in a sensible way, but when one escalates to the detriment of the others, then we have a problem. And lately one end of the spectrum has been pulled much more forcefully. The fallacy of Information Visualization as a conveyor of “pretty pictures” is drastically threatening the field, undermining its goals and expectations. “We have to fight that or risk the trivialization and marginalization of visualization as an analytic tool”, asserts Robert Kosara in a recent review of the manifesto.
So what do we do at this stage? We either try to restore the balance or we acknowledge a clearer divide. I do not think we can have a convoluted, multipurpose, all-encompassing practice. That would be detrimental to us all.
As I stated in the manifesto, I think Information Visualization and Information Art can and should coexist, by learning from each other and cross-pollinating ideas, methods and techniques. In fact, I believe this separation is beneficial for both areas, since it frees them from inadequate concerns and aspirations. Information Art can really push the creative limits of data and in the process generate new techniques and algorithms, but also spark public discourse – one of the great qualities of Art. On the other hand, Information Visualization can mature as an analytical tool, providing a reliable and critical source of insight to many future challenges we are still to face.
We have observed a similar symbiotic process between Art and Cartography for many centuries. Several authors have written on this subject, and David Woodward, in his Art & Cartography, published in 1987, describes in detail how numerous artists were influenced by cartography, and how maps themselves were hung on walls as pieces of art. Nevertheless, both fields have always kept their independent paths and individual aspirations.
The divide between Information Visualization and Information Art is not clear-cut, and there’s certainly space for a thriving middle ground. Labels can also change. Kosara even suggests we start using the term “Visual Analysis” as a substitute for Visualization. This is something we can certainly discuss as a community, and there are many benefits to doing so. Once we all agree on what we do, it will be easier for others to recognize the goals and boundaries of our growing discipline.
* You’re welcome to continue the discussion here, or add your comment to the original post on the manifesto *
Posted: August 30th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 31 Comments »
“The purpose of visualization is insight, not pictures”
Ben Shneiderman (1999)
Over the past few months I’ve been talking with many people passionate about Information Visualization who share a sense of saturation over a growing number of frivolous projects. The criticism is slightly different from person to person, but it usually goes along these lines: “It’s just visualization for the sake of visualization”, “It’s just eye-candy”, “They all look the same”.
When Martin Wattenberg and Fernanda Viégas wrote about Vernacular Visualization, in their excellent article in the July-August 2008 edition of interactions magazine, they observed how the previous couple of years had witnessed the tipping point of a field that used to be locked away in its academic vault, far from the public eye. The recent outburst of interest in Information Visualization caused a huge number of people to join in, particularly from the design and art communities, which in turn led to many new projects and a spurt of fresh innovation. But with more agents in a system you also have a stronger propensity for things to go wrong.
I don’t tend to be harshly critical of the many projects that glorify aesthetics over functionality, because I believe they’re part of our continuous growth and maturity as a discipline. They also represent important steps in this long progression toward discovery, in which we are still trying to understand how we can find new things in the rising amounts of data at our disposal. However, I do feel it’s important to reemphasize the goals of Information Visualization and, at this stage, make a clear departure from other parallel, yet distinct practices.
When talking to Stuart Eccles from Made by Many, after one of my lectures in August 2009, the idea of writing a manifesto came up, and I quickly decided to write down a list of considerations, or requirements, which rapidly took the shape of an Information Visualization Manifesto. Some will consider this insightful and try to follow these principles in their work. Others will still want to pursue their own flamboyant experiments and not abide by any of this. But if the latter option is chosen, the resulting outcome should start being categorized in a different way. And there are many designations that can easily encompass those projects, such as New Media Art, Computer Art, Algorithmic Art, or my favorite and recommended term: Information Art.
Even though a clear divide is necessary, it doesn’t mean that Information Visualization and Information Art cannot coexist. I would even argue they should, since they can learn a lot from each other and cross-pollinate ideas, methods and techniques. In most cases the same dataset can give rise to two parallel projects, one in Information Visualization and one in Information Art. However, it’s important to bear in mind that the context, audience and goals of each resulting project are intrinsically distinct.
In order for the aspirations of Information Visualization to prevail, here are my 10 directions for any project in this realm:
Form Follows Function
Form doesn’t follow data. Data is incongruent by nature. Form follows a purpose, and in the case of Information Visualization, Form follows Revelation. Take the simple analogy of a wooden chair. Data represents all the different wooden components (seat, back, legs) that are then assembled according to an ultimate goal: to seat a person in the case of the chair, or to reveal and disclose in the case of Visualization. Form in both cases arises from the conjunction of the different building blocks, but it never conforms to them. It is only from the problem domain that we can ascertain whether one layout may be better suited and easier to understand than another. Independently of the subject, the purpose should always be centered on explanation and unveiling, which in turn leads to discovery and insight.
Start with a Question
“He who is ashamed of asking is afraid of learning”, says a famous Danish proverb. A great quality in anyone working in the realm of Information Visualization is to be curious and inquisitive. Every project should start with a question: an inquiry that leads you to discover further insights into the system, and in the process answer questions that weren’t even there in the beginning. This investigation might arise from a personal quest or from the specific needs of a client or audience, but you should always have a defined query to drive your work.
Interactivity is Key
As defined by Ben Shneiderman, Stuart K. Card and Jock D. Mackinlay, “Information Visualization is the use of computer-supported, interactive, visual representations of abstract data to amplify cognition”. This well-known statement highlights how interactivity is an integral part of the field’s DNA. Any Information Visualization project should facilitate not only understanding but also analysis of the data, according to specific use cases and defined goals. By employing interactive techniques, users are able to properly investigate and reshape the layout in order to find appropriate answers to their questions. This capability becomes imperative as the complexity of the portrayed system increases. Visualization should be recognized as a discovery tool.
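As a loose illustration (not from the original post), the filter-and-drill interaction this principle describes can be sketched in a few lines of Python. The dataset and function names here are hypothetical, standing in for whatever view a real visualization would render:

```python
# A minimal, library-free sketch of the interactive loop: the user
# reshapes the view by filtering, then drills into details on demand.

dataset = [
    {"country": "Portugal", "year": 2008, "value": 42},
    {"country": "Portugal", "year": 2009, "value": 47},
    {"country": "Denmark",  "year": 2009, "value": 35},
]

def filter_view(data, **criteria):
    """Reshape the view: keep only the records matching every criterion."""
    return [row for row in data
            if all(row.get(k) == v for k, v in criteria.items())]

def details_on_demand(view):
    """Drill down: summarize whatever is currently visible."""
    values = [row["value"] for row in view]
    return {"count": len(values), "total": sum(values)}

# Example interaction: the user narrows the view to 2009, then inspects it.
view = filter_view(dataset, year=2009)
print(details_on_demand(view))  # {'count': 2, 'total': 82}
```

In a real tool the filtering would be driven by sliders or brushing rather than function calls, but the structure is the same: every interaction re-derives the view from the data, so the user can keep asking new questions of the same system.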
Cite your Source
Information Visualization, like any other means of conveying information, has the power to lie, to omit, and to be deliberately biased. To avoid any misconception you should always cite your source. Whether your raw material is a public dataset, the results of a scientific study, or even your own personal data, you should always disclose where it came from, provide a link to it, and if possible, clarify what was used and how it was extracted. By doing so you allow people to review the original source and properly validate its authenticity. It will also bring credibility and integrity to your work. This principle has long been advocated by Edward Tufte and should be widely applied to any project that visually conveys external data.
The power of Narrative
Human beings love stories and storytelling is one of the most successful and powerful ways to learn, discover and disseminate information. Your project should be able to convey a message and easily encapsulate a compelling narrative.
Do not glorify Aesthetics
Aesthetics are an important quality in many Information Visualization projects and a critical enticement at first sight, but they should always be seen as a consequence and never as the ultimate goal.
Look for Relevancy
Extracting relevancy from a set of data is one of the hardest pursuits for any machine. This is where natural human abilities such as pattern recognition and parallel processing come in handy. Relevancy is also highly dependent on the final user and the context of interaction. A high relevancy ratio increases the likelihood of comprehension, assimilation and decision-making.
Embrace Time
Time is one of the hardest variables to map in any system. It’s also one of the richest. If we consider a social network, we can quickly realize that a snapshot in time would only tell us a small part of the community’s story. On the other hand, if time had been properly measured and mapped, it would provide us with a much richer understanding of the changing dynamics of that social group. We should always consider time whenever our targeted system is affected by its progression.
Aspire for Knowledge
A core ability of Information Visualization is to translate information into knowledge. It is also to facilitate understanding and aid cognition. Every project should aim at making the system more intelligible and transparent, or at uncovering an explicit new insight or pattern within it. It should always provide a polished gem of knowledge. As Jacques Bertin eloquently stated in his Sémiologie Graphique, first published in 1967, “it is the singular characteristic of a good graphic transcription that it alone permits us to evaluate fully the quality of the content of the information”.
Avoid gratuitous visualizations
“Information gently but relentlessly drizzles down on us in an invisible, impalpable electric rain.” This is how physicist Hans Christian von Baeyer opens his book Information: The New Language of Science. To the growing amount of publicly available data, Information Visualization needs to respond as a cognitive filter, an empowered lens of insight, and should never add more noise to the flow. Don’t assume any visualization is a positive step forward. In the context of Information Visualization, simply conveying data in a visual form without shedding light on the portrayed subject, or, even worse, making it more complex, can only be considered a failure.
Posted: August 27th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
Made by Many | BBH
On Tuesday I was at BBH in London, at the invitation of Justin McMurray @juzmcmuz from Made by Many. The audience had great questions following the lecture, which always makes for a richer experience. It was also great to meet some of the people from the cool agency Made by Many and from BBH, particularly @stueccles and @PatsMc, and to hang out with some old friends.
Information Kinetics: Egoviz
Last weekend I was in the beautiful city of San Sebastián to give a lecture at Arteleku, in the context of Information Kinetics: Egoviz - a two-week workshop directed by bestiario. It was great to see Santiago Ortiz again, to meet Kepa Landa (from Arteleku), and all the students and collaborators involved in the workshop. Here you can learn more about the projects developed in this “taller” (Spanish for workshop).
Posted: August 26th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 1 Comment »
It’s still too early to know where the idea of building a TEDViz community might take us, but here’s a post by Michelle Borkin that explains how it all started.
Then it happened, the idea was born: how about “TEDViz”? A whole TED meeting devoted to “visualization”? Visualization in the abstract, in art and design, in business, in the sciences, online and in print… It is one “field” that is extremely important and valued by TED, yet has in many ways been neglected. We instantly started rambling off the tops of our heads all the brilliant visualization designers, researchers, and developers who should have a chance to spread their ideas in only the way TED intended yet have never had the chance to attend TED…