Posted: January 25th, 2011 | Author: Manuel Lima | Filed under: Uncategorized | 1 Comment »
When I first started grouping projects in VC by visual method, in June 2007, radial convergence was already the most popular group, with roughly eight projects. You can see that early classification on a now-extinct page of VisualComplexity.com, as captured on June 10, 2007 (thanks to the WayBack Machine):
As you can see from the image above, those eight radial convergence projects were amongst the first to be indexed in VC. Interestingly enough, three of them, AS Internet Graph (2002), GNOM (2005), and Circos (2005), are amongst my favorites to this day. Although I had started talking about this method at conferences like MeshForum (San Francisco) and reboot 9.0 (Copenhagen, Denmark), it remained nameless for a while. The label came out of a need to classify this and other layout types within the growing collection of projects. Since the model is essentially defined by a radial ordering of items whose links converge with each other, the title radial convergence became an intuitive fit. However, it was hard to predict the method would take off as much as it did. Within the last three years it has become immensely popular, and it seems that every batch of new projects added to VC includes at least one exhibiting this favored layout.
There are probably many reasons for this popularity. First, it's simple to execute: it's probably one of the easiest and fastest ways to prototype or visually convey a relational dataset. Second, it's remarkably alluring. Humans have a well-documented obsession with the circle and the iconographic qualities it has been revered for through the millennia, such as divinity, perfection, unity, and closure. Third, add to these the growing availability of data, the number of visualization enthusiasts, and easy-to-use software, and we have the perfect conditions for growth, multiplication, and increasing popularity.
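To show just how simple the basic execution can be, here's a minimal sketch of the layout in Python with matplotlib. Everything in it (the item count, the edge list, the styling) is a made-up placeholder, not any particular project's data: items are ordered evenly around a circle, and each link is bent in towards the centre.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch

# Hypothetical relational data: 12 items and a handful of links between them.
n = 12
edges = [(0, 5), (1, 8), (2, 9), (3, 7), (4, 10), (6, 11)]

# Radial ordering: place every item evenly around a circle.
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
xs, ys = np.cos(angles), np.sin(angles)

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(xs, ys, zorder=2)

# Convergence: bend each link towards the centre with a quadratic Bezier curve.
for a, b in edges:
    verts = [(xs[a], ys[a]), (0, 0), (xs[b], ys[b])]  # start, control point, end
    codes = [Path.MOVETO, Path.CURVE3, Path.CURVE3]
    ax.add_patch(PathPatch(Path(verts, codes), fill=False, alpha=0.6))

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

Dedicated tools like Circos obviously do a great deal more (ribbons, scales, edge bundling), but the underlying geometry really is this simple.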
Currently there are 33 projects under radial convergence in VC, mapping a variety of subjects, from IP addresses to Facebook friends. Here’s a screenshot of all of them, as of January 24, 2011.
Posted: November 2nd, 2010 | Author: Manuel Lima | Filed under: Uncategorized | 2 Comments »
It has been a while since I’ve posted anything on this blog or in VC. Here’s why:
In mid-July my wife and I left London, after four great years living in the English capital. Our last days were packed with parties and plenty of drinks. We moved to Portugal together with roughly 580 kg (35 boxes) of our stuff - with books being the heaviest category, at 220 kg (to my wife's annoyance).
Portugal (Lisbon, Batalha, Ponta Delgada, Albufeira)
August and September were spent in different areas of Portugal, enjoying the sun with our family and friends. During this period I also finished the second revision of the VC book. After a long recruitment/visa/relocation process, the day of the big move finally arrived.
On October 18th, 2010, I joined Microsoft Bing as a Senior UX Design Lead, and after a few days of orientation in Redmond, we finally moved to New York City where we’re now living in our temporary apartment.
We’re still getting used to the NYC lifestyle and slowly losing our tourist badges. New York City will be our home from now on, so in case you want to meet or reach me in any way, you know where to find me.
Future + Bing
You can certainly expect more regular updates on VC now that my life is settling again. As for the VC book, it's currently in its third revision and will be available for purchase next year (more information to come soon). After a year of writing, researching, speaking, and occasional consulting, it's quite stimulating to roll up my sleeves once again and join the great design team at Microsoft Bing. I'm very excited about this new challenge and all its future possibilities. Interesting times ahead…
Posted: November 1st, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
The first time I tweeted about RSA Animate was in July 2010, when I posted about the canny Crisis of Capitalism video. Since then, the Royal Society of Arts (RSA) has released a few more videos that are remarkable examples of visual storytelling. If you haven't seen any of these highly addictive pieces, you don't know what you're missing.
See all videos here:
Posted: July 26th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | 12 Comments »
This is an original guest post by Ricardo Nuno Silva for the VC Blog. Ricardo is a Portuguese applications developer with a longtime curiosity about the impact of digital technologies on everyday life. You can contact him at firstname.lastname@example.org.
In the last few years, many tools and techniques have been developed to help us visualize songs, music, and sounds. This post is a showcase of some of the greatest of these tools, each focused on a particular aspect of this challenging type of visualization.
The most common examples of sound-visualization software are the visualizers built into media players, but these usually only translate sound frequencies into shapes and colors on the screen. They've been used extensively for leisure, relaxation, and dance parties.
The tools in this showcase take a different approach, as they truly “understand” music at the level of its individual notes. Some can be used in real time, while others need to do some number-crunching to analyze each song first.
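For tools that work from symbolic data, this note-level “understanding” typically comes from a MIDI file rather than raw audio. As a rough illustration of the idea (the file name is a placeholder, and overlapping same-pitch notes are handled naively), here is a minimal piano-roll sketch in Python using the mido library:

```python
import mido
import matplotlib.pyplot as plt

# Hypothetical input; any standard MIDI file should do.
mid = mido.MidiFile("song.mid")

# Pair note_on/note_off events to recover (start, duration, pitch) per note.
now, active, notes = 0.0, {}, []
for msg in mid:  # iterating a MidiFile yields messages with delta times in seconds
    now += msg.time
    if msg.type == "note_on" and msg.velocity > 0:
        active[msg.note] = now
    elif msg.type in ("note_off", "note_on"):  # note_on with velocity 0 ends a note too
        start = active.pop(msg.note, None)
        if start is not None:
            notes.append((start, now - start, msg.note))

# Piano roll: one horizontal bar per note; x is time, y is pitch.
fig, ax = plt.subplots(figsize=(10, 4))
for start, duration, pitch in notes:
    ax.barh(pitch, duration, left=start, height=0.8)
ax.set_xlabel("time (s)")
ax.set_ylabel("MIDI pitch")
plt.show()
```

MIDI-driven tools like the Music Animation Machine below start from essentially this representation and then layer color, motion, and timing on top of it.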
Below each image there’s the name of the tool or technique, some great video examples, and a link to the author’s site.
If you know of other great tools or videos of music visualization, please leave a comment below or send an email. Thank you!
Static Visualization of Songs
The Shape of Song by Martin Wattenberg
Narratives 2.0 by Matthias Dittrich
Similar to Sheet Music
Don’t Be Sad by Brad Mehldau
MIDI Music Visualization Videos for Deaf and Hearing Impaired People by Eric Rangell.
Music Animation Machine (MAM) by Stephen Malinowski. See: Beethoven 5th Symphony.
Visualization of Instruments Output
Clavilux 2000 by Jonas Heuer.
Celeste Motus by the Abstract Birds. (via Pedro Custódio)
MuSA.RT - Music on the Spiral Array. Real-Time by Elaine Chew and Alex François.
TypeStar by Scott Garner.
Learning Games through Visualization
Synthesia (for piano) by Nicholas Piegdon.
Animating Virtual Instruments
MIDIJam by Scott Haag. See: MidiJam (I just died in your arms).
Pipe Dream by Animusic. See also: MIDIJam meets Animusic: Pipe Dream
Ljósið by Ólafur Arnalds
Just Colour by Jesper Brevik
See other music-related visualizations @ VisualComplexity.com | Music.
Posted: April 19th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | 2 Comments »
Ocean explorers are puzzling out Nature’s purpose behind an astonishing variety of tiny ocean creatures like microbes and zooplankton animals – each perhaps a ticket-holder in life’s lottery, awaiting conditions that will allow it to prosper and dominate.
The inventory and study of the hardest-to-see sea species — tiny microbes, zooplankton, larvae and burrowers in the sea bed, which together underpin almost all other life on Earth — is the focus of four of 14 field projects of the Census of Marine Life.
The results from the latest census revealed spectacular examples of hard-to-see underwater microbes, available in this stunning gallery of some of the smallest sea species.
Posted: April 18th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | 4 Comments »
Flame Dragon, by Peter Blaskovic (created in Flame Painter)
As I was organizing my RSS feeds in feedly, I stumbled upon Gert K. Nielsen's piece on Visual Journalism, written on March 22, 2010. The venturesome title of Nielsen's post was “The next big thing in infographics - five criterias and a solution“. Intriguing and stimulating. I was immediately on board. That is, until I started reading his five recommendations and final proposed solution. You should read it and draw your own conclusions, but I found Nielsen's piece absolutely bewildering.
- The first recommendation, on the need for computer-generated infographics, reads more like a natural progression of the field than a recommendation, and is perhaps the most obvious of the list. The second and third criteria, on the other hand, are a bit more disconcerting.
- “It must be beautiful”, Nielsen says at the beginning of his second suggestion. Nothing wrong with that, but you would expect some reflection on the benefits of aesthetics to follow that statement. Instead, Nielsen appears to be infatuated with aesthetics solely for their popularity… As he explains, “right now the interest is on presentation much more than the content”.
- But the third criterion is even more baffling. “It has to be somewhat ambiguous”, states Nielsen. Yes, take a deep breath and read it again. And perhaps, like me, you'll wonder: what? But wait, Nielsen immediately comes to our rescue, grounding his view in a remarkable argument: “Describing things in black and white and sharp vector lines is too fanatic. Blends are much better suited to describe a complex situation”. Yes, let's reconsider this fanaticism for objectivity, clarity, and content. The future of infographics is ambiguity!
- (I didn't quite understand this point, so if someone does, please explain.)
- Moving on to his fifth criterion, since I couldn't grasp the fourth, Nielsen asserts: “It needs to work in online presentations too”. This could be an interesting starting point for an analysis of the different contexts in which infographics are used and the variety of platforms they could explore, but Nielsen falls short in his explanation, merely stating that infographics could be integrated into online presentations “perhaps by moving or evolving over time”. A very light investigation, to say the least.
But perhaps the most disquieting part of the post is the solution Nielsen proposes for the future of the field. As he explains: “The solution I came up with is particles in 3D-programs“. Brilliant! According to Nielsen, there's no particular downside to 3D particles (think about clarity and legibility), apart from their demanding learning curve, or in other words, the time it takes to learn these “really tough concepts”. In his pursuit of ambiguity, it's not entirely surprising that Nielsen fails to consider any other drawback to his formula. His proposed solution becomes slightly more tangible when he presents an example of this vision: Flame. As he explains, “the ability to paint with ‘flames’ fits right into my expectation of seeing graphics with an appearance that fits the current times”.
I will not expand too much on why I find this view seriously distressing, since I've done so before, more than once. But it feeds the growing confusion that Robert Kosara alludes to in his latest post, The Visualization Cargo Cult. Gert Nielsen's post, as puzzling as it might seem, reflects a seriously disturbing view that sees objective infographics as a thing of the past and appealing ambiguity as a much better fit for the “current times”. I just hope it doesn't become a contagious meme.
Posted: April 13th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
NYT - Obama’s 2011 Budget Proposal: How It’s Spent
Rectangles in the chart are sized according to the amount of spending for that category. Color shows the change in spending from 2010.
A zoomable treemap for the life records of the Natural Science Museum of Barcelona, by Bestiario.
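The encoding behind both of these, area for magnitude and color for change, is easy to prototype. Here is a minimal sketch in Python, assuming the small squarify layout package; the budget figures are invented placeholders, not the NYT's actual data:

```python
import matplotlib.pyplot as plt
import squarify  # pip install squarify: a small treemap layout helper

# Hypothetical budget categories: (name, spending, % change from 2010).
data = [("Health", 882, +7.5), ("Pensions", 761, +4.1), ("Defense", 738, +2.0),
        ("Education", 128, -1.2), ("Transport", 104, +0.8)]

sizes = [spend for _, spend, _ in data]
changes = [change for _, _, change in data]

# Area encodes spending; a diverging colormap encodes the change from 2010.
norm = plt.Normalize(vmin=min(changes), vmax=max(changes))
colors = [plt.cm.RdYlGn(norm(c)) for c in changes]

squarify.plot(sizes=sizes, label=[name for name, _, _ in data], color=colors)
plt.axis("off")
plt.show()
```

A zoomable version like Bestiario's adds interaction on top, but the underlying layout idea is the same.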
Posted: April 12th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
Most VC readers must already be familiar with Data Flow 2, the most recent volume in the growing Data Flow family, published in February 2010. Featuring interviews with New York Times Graphics Editor Steve Duenes and Art+Com Director Joachim Sauter, as well as one with Andrew Vande Moere and myself, the book is an inspirational compendium of hundreds of projects. It presents itself as a portfolio book, showcasing an array of innovative approaches (many of them indexed in VC) that are incredibly provocative and inspiring. Due to its coffee-table nature, the title doesn't aim at an in-depth analysis or theoretical reflection on the displayed projects and defined categories, but acts primarily as a stimulating showcase of ideas.
As Andrew Vande Moere eloquently states in his review, the foreword doesn’t quite align with the book’s content, since most of its assertions for insightfulness are not necessarily substantiated in the variety of executions showcased throughout the book. Nonetheless, Data Flow 2 is a great source of inspiration for anyone working in the domain of data visualization.
Posted: April 7th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
A very appealing spline-based 3D form in Processing that represents the bass frequency and puts it into motion. As Christian Bannister explains:
What would the bass look like? What would it be like to touch it and manipulate it directly and visually in real-time? These are some of the things I am trying to get at in this sketch.
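For the curious, the core signal-processing step behind a piece like this, estimating the energy of the bass band for each short frame so it can drive a form, is quite compact. The sketch below is only a rough Python illustration of that idea (not Bannister's actual Processing code), assuming a WAV file as input:

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical input file; stereo audio is collapsed to mono first.
rate, samples = wavfile.read("track.wav")
samples = samples.astype(np.float64)
if samples.ndim > 1:
    samples = samples.mean(axis=1)

# Estimate the energy in the bass band (roughly 20-250 Hz) per 2048-sample frame.
frame = 2048
freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
band = (freqs >= 20) & (freqs <= 250)

bass_energy = []
for start in range(0, len(samples) - frame, frame):
    spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
    bass_energy.append(spectrum[band].mean())

# bass_energy is now a per-frame control signal, e.g. for a spline's amplitude.
print(f"{len(bass_energy)} frames, peak bass energy {max(bass_energy):.1f}")
```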
Posted: April 7th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
Unfortunately the Call for Participants is now closed, but the initiative should nonetheless be interesting to follow. Synthetic Aesthetics aims to bring creative practitioners, those expert at studying, analyzing, and designing the synthetic/natural interface, together with the existing synthetic biology community, to help with the work of designing, understanding, and building the living world.
From this thought-provoking premise:
Biology has become a new material for engineering. From the design of biological circuits made from DNA to the design of entire systems, synthetic biology is very much interested in making biology something that can be designed.
The project asks:
Can collaborations between synthetic biology and design inform and shape the developing field of synthetic biology—the engineering of new and existing biological entities for useful means? What insights can design offer in designing microscopic entities for a human-scale world? Can design learn from synthetic biology?
Posted: February 10th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | 2 Comments »
(Originally written in May 2009 for a now-extinct section of VC)
Recursive cycles of innovation happen in many different areas, most notably in the domains of Art and Design. Even those who are completely out of touch with fashion can observe how the field is constantly rediscovering its past, recycling ideas, and incessantly mixing new trends with old influences. In Economics, it's a well-known fact that business trends and stock markets expose processes that tend to repeat themselves in a more or less regular fashion. In this domain, the most notorious study was developed by Nikolai Kondratiev, a Russian economist who observed a series of long, 50-year cycles in the modern world economy. The Kondratiev waves, as they were later called, consist of alternating periods of high and low sectoral growth, which in most cases have proved accurate since the end of the 18th century. In fact, from whatever angle you look at history, there's always someone ready to point out a specific recursive, cyclical pattern.
But even though we are accustomed to this type of process in many fields, we always assume technology, and in particular the computing industry, is immune to it. After all, our technological progress is made of several vertiginously rising paths that share the absence of a rear-view mirror. There's no point in looking back, or even considering that some aspects of the past might recur, simply because there's nothing to learn from them. Nonetheless, the act of uncovering patterns and potential cycles, particularly in an industry that prides itself on continuous fresh innovation, is an extremely appealing exercise.
The pattern I'm about to describe is divided into three periods, starting at the foundation of computing history and ending with a set of strong indicators of a third new cycle. It tries to make the case that even though individual technological components each evolve at their own remarkably fast pace, the way in which they interrelate and behave might follow some level of cyclical recurrence. These three stages are separated by roughly two periods of 25 years. The first cycle started in the late 1950s, with the spread of the mainframe computing model; the second stage began in the early 1980s, with a succession of events that led to the emergence of the highly powerful laptop computer. Finally, the latest cycle has just started. Led by Cloud Computing and the Netbook phenomenon, everything seems to indicate this will be a major movement for many years to come. From an initial centralized model, through a dispersion of increasingly independent machines, the new shift foresees storage and computing power draining away from many portable computers and a return to a model based on data centrality. The main distinction this time is that instead of the mainframe, the “Cloud” emerges as the central interconnected hub. Although recurring cycles might be a noticeable pattern in how data is stored and accessed, there's still one common thread to all these stages: a continuous, straight progression towards mobility.
First Cycle | The Central Mainframe
Characterized by one central computer, responsible for most of the storage and processing power, linked to a series of satellite terminals, the mainframe computing model has been a key protagonist in the history of the modern computer since the late 1950s. Back then, people accessed and interacted with immensely large mainframes through a variety of linked terminals that changed significantly over time. From early punch cards and teleprinters, to later video displays with their familiar green and amber screens, interactive computer terminals through the 1960s and 70s had one thing in common: they were powerless, unintelligent, and entirely dependent on the central mainframe.
Second Cycle | The Rise of the Laptop and its Portability Effect
By the end of the 1970s, specialized terminals, the precursors of modern-day portable computers, were becoming smarter. Initially packed with terminal-emulation software, these machines were detaching themselves from the almighty mainframe and becoming self-sufficient entities with their own processing capability. This process opened the path for the desktop computer, with early pioneers like the Apple II and the IBM 5150 leading the way. The course of computing mobility had just started, and it would be just a matter of time before laptop computers began to materialize and eventually replace desktop computers.
For the most part, the computing industry in the past 25 years has seen laptops dramatically increase their computing power and rival traditional desktop PCs. In 1986, battery-powered portable computers had about 2% of market share worldwide. Today there are more laptops than desktops in business and general use, and in 2008 more laptops than desktops were sold in the US. Even though some mainframes have evolved into the supercomputers of the modern age, uncovering important aspects of science, like the structure of the cosmos or the vast neuronal network of the human brain, the true hero of this story is the laptop. These tiny compact boxes have become potent, full-fledged machines with the added benefit of portability, an essential attribute in an increasingly mobile world. But how long will the mobile processing-power rush last? Has it in fact reached a tipping point? Will the hero of the last decade be partially or entirely replaced by its new, weaker adversary: the Netbook?
Third Cycle | Cloud Computing: The Personal Mainframe
Cloud Computing is seen as the next computing trend and the key driver of the third cycle of data centrality. It can simply be described as a “style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet”. This model is extremely in tune with our contemporary lifestyle. We currently access the web through a variety of devices with different features, shapes, and sizes. And while the number of access points keeps increasing, the ability to sync content between them is still an immense headache, and we rarely (if ever) see a satisfactory user experience. The Cloud paradigm substantially alleviates this problem by relying increasingly on services, information, and applications stored on online servers, the vast Cloud landscape, which can then be accessed at any time, from anywhere, as long as there is an online connection.
In an enlightening special report by The Economist, entitled “Let it rise”, it is asserted that the Cloud is already a common phenomenon, with 69% of Americans connected to the web using some type of “cloud service”, including web-based e-mail or online data storage. Many companies are following this feverish movement, and in the same report Irving Wladawsky-Berger compares it to the Cambrian explosion some 500m years ago, when the rate of evolution sped up, in part because the cell had been perfected and standardised, allowing evolution to build more complex organisms.
Another indicator of this turning point is the Netbook. In part driven by a global economic downturn, the Netbook phenomenon might prove to be more than a passing craze. Characterized as a lightweight, economical, and energy-efficient laptop, especially suited for wireless communication and Internet access, this new mobile computer has been all over the news lately. A recent article in Newsweek magazine uncovered a growing market trend in Japan, where more consumers are opting for netbook computers. While PC sales in Japan went down 4 percent in the fourth quarter of 2008, sales of netbooks shot up 43 percent. The recession has been an important driver of this consumer shift, since people have become more price-sensitive, but the growth of cloud computing is its vital ingredient. There's also an undeniable rational deduction behind this behavioral change: many users are starting to question whether they actually need all that bustling speed and storage when their computers are mostly used for emailing and web browsing.
In the diagram shown above we can observe a series of laptops and netbooks linked to a central Cloud, which is in turn surrounded by a multiplicity of abstract devices. Many of these future devices will not require vast processing capability, since they will work as rendering windows for the same online services, flowing incessantly through all of them. The role of “window for services” might be what awaits many future mobile computers, including mobile phones. This points to the growing value and significance of online services and applications as the vital glue across many systems and platforms.
Predictions always feel like empty promises, and there can be no certainty about what the future holds. Is the Netbook the predecessor of a future class of dumb terminals entirely dependent on the Cloud? Is Cloud Computing really going to be the next big thing? If so, how long will it last? Will it prove to be a long-lasting shift, or will people grow increasingly wary of their privacy and lack of ownership, return to a model similar to today's, and in the process instigate a fourth cycle of data centrality?
Posted: February 8th, 2010 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
At the end of January I was in Paris for a couple of events: a talk at ESAG - École Supérieure de Design, d'Art Graphique et d'Architecture Intérieure, and a one-week workshop at ENSAD - École Nationale Supérieure des Arts Décoratifs. The lecture at ESAG was great, and the lengthy discussion that followed, with a packed audience of inquisitive students, was extremely engaging.
The workshop at ENSAD was a longer and very fruitful engagement, part of IDN: Identité numérique mobile (Digital Mobile Identities), a new program of ENSAD Lab, a research unit for creation and innovation gathering graduate research students and professionals to collaborate on and discuss the contemporary challenges of design. Led by Remy Bourganel and Etienne Mineu, IDN is meant to investigate the flows, emerging patterns, and representations that qualify a new digital mobile identity. In this context, students at the workshop explored different ways of analyzing and visualizing the social dimensions most relevant to them. Some of their initial studies can be seen here.
Posted: December 8th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 6 Comments »
*UPDATE* - The UK Met Office data was made available today: http://bit.ly/7mWJbx (scroll to the bottom) or direct link: All.zip (3.7MB). Here’s the corresponding station codes per country.
All eyes are now set on Copenhagen, for what is, in my view, one of the most important meetings ever held. Following the overhyped data-fraud scandal, which many skeptics have branded “Climategate”, the UK Met Office decided to make available the data for more than 1,000 weather stations from across the world, in order to hush divergent voices. The dataset, to be released this week, is a subset of stations evenly distributed across the globe and provides a “fair representation of changes in mean temperature on a global scale over land”, said the Met Office in a statement. “We are confident this subset will show that global average land temperatures have risen over the last 150 years.”
The data has not yet been made public, but once it is I will update this post. In case you cannot wait for this dataset, the group of scientists at RealClimate.org has recently put together a cohesive list of data sources, from numerous satellites and stations, on sea levels, sea temperature, surface temperature, aerosols, greenhouse gases, and more. In a blog post announcing the list, the group states:
Much of the discussion in recent days has been motivated by the idea that climate science is somehow unfairly restricting access to raw data upon which scientific conclusions are based. This is a powerful meme and one that has clear resonance far beyond the people who are actually interested in analysing data themselves. However, many of the people raising this issue are not aware of what and how much data is actually available.
This represents a great moment for all of us involved in Visualization at large to be part of the solution and deliver a clear, unequivocal view of what's happening to our planet. Regardless of how you label your practice, Information Visualization, Data Visualization, Information Design, Visual Analytics, or Information Graphics, this is ultimately a call for everyone dealing with the communication of information for human reasoning. Let's roll up our sleeves!
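Even a few lines of Python are enough to turn a table of yearly temperature anomalies into a readable picture. The file name and column layout below are hypothetical placeholders, since the Met Office station files come in their own record format:

```python
import csv
import matplotlib.pyplot as plt

# Hypothetical layout: a CSV with "year" and "anomaly" columns, where anomaly
# is the mean land temperature difference (in degrees C) from a fixed baseline.
years, anomalies = [], []
with open("land_anomalies.csv") as f:
    for row in csv.DictReader(f):
        years.append(int(row["year"]))
        anomalies.append(float(row["anomaly"]))

# Warm years above the baseline in red, cool years in blue.
fig, ax = plt.subplots(figsize=(9, 4))
ax.bar(years, anomalies,
       color=["tab:red" if a > 0 else "tab:blue" for a in anomalies])
ax.axhline(0, color="black", linewidth=0.8)
ax.set_xlabel("year")
ax.set_ylabel("temperature anomaly (°C)")
plt.show()
```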
Posted: December 8th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | No Comments »
As some of you might have noticed, I've been away for a while, so I just wanted to give you a short update on my whereabouts. I got married at the end of October and had a fantastic honeymoon in Asia. After arriving back in mid-November, I was in Lisbon, Portugal, for a talk at a conference organized by the Society for News Design, and later in Sheffield, UK, for a talk at the School of Architecture (SUAS). More recently I was with Santiago Ortiz, Aaron Koblin, Ben Cerveney, Jose Luis de Vicente, and an amazing group of people at Visualizar'09, at MediaLab Prado, Madrid. The workshop went great and I had a really good time. Apart from all of the above, I've been busy with this (an update to follow shortly).
Posted: September 24th, 2009 | Author: Manuel Lima | Filed under: Uncategorized | 3 Comments »
On a recent review of the VC database I was simply astounded by the number of dead links among the indexed projects. Worst of all, some projects have become completely untraceable, possibly gone forever. This was an exasperating moment. VisualComplexity.com, regardless of how insignificant it might seem in the big scheme of things, is still a compact archive of an epoch, showcasing tendencies, methods, discoveries, and fragmented insights into the modus operandi of our contemporary society. For many people searching for those lost projects, VC is not a curated starting point but a frustrating dead end, leaving them with a slightly bitter taste in their mouths. Sure, some authors could be more organized and concerned with the documentation of their projects, but that still wouldn't solve the issue. The main drawback we are dealing with is the medium itself.
At the present time, we have access to countless cuneiform documents, including economic records, letters, and literary works from early Sumerian times, produced over 4,000 years ago. Many of these artifacts are essential to our understanding of the values and practices that shaped this ancient culture. Can we aspire to the same longevity for our modern cultural artifacts? Most certainly not. We would be lucky if a tiny percentage of our documents lasted even a fraction of that time scale. We are so infatuated with our digital virtuosity that we are blind to its ephemeral nature. It's curious how, at this stage in civilization, when we are collecting more data than ever before, in quantities that would astonish any nineteenth-century researcher, we are storing it in one of the most fragile and volatile of mediums, if and when we store it at all.
Yes, initiatives such as the Internet Archive are critical, but they remain remarkably far from any realistic aspiration. In a captivating article in The Wall Street Journal, journalist Robert Hotz explains how “Scientists who collaborate via email, Google, YouTube, Flickr and Facebook are leaving fewer paper trails, while the information technologies that do document their accomplishments can be incomprehensible to other researchers and historians trying to read them.” As we communicate through more and more channels, our trail becomes thinner and thinner. And as time passes, our chances of recovering precious records become ever more remote.
Hotz provides an illustrative case of this critical challenge. When the leading evolutionary biologist William Donald Hamilton died in 2000, the British Library received a pile of his research papers, together with letters, drafts, and lab notes. Among these documents were 26 cartons containing “vintage floppy computer disks, reels of 9-track magnetic tape, stacks of 80-column punch cards, optical storage cards and punched paper tapes”, some dating back to the 1960s. In order to extract much of the crucial stored information, “that could illuminate an influential life of science”, researchers at the Library had to arduously assemble a “collection of vintage computers, old tape drives and forensic data-recovery devices in a locked library sub-basement.”
I found this account extremely alarming and unsettling, particularly since it addresses a gap of a mere 40 years. Forty years! Now imagine the difficult task facing historians 400 years from now. We can do more, and we have to. Otherwise we run the risk of becoming a memoryless generation, or worse, of ushering in a digital dark age.