Archives for Author: Trevor Haldenby

100% Organic Content

When I started my guest editorship of this blog almost a year ago, it was with a question about the relationship between tools and creations. Will new ways of doing things affect what it is that we do? To try to flesh out my feelings on that question, I’ve peeked into a number of fields where innovation is intense: from documentary filmmaking to game design, architecture to (my) marriage.

But there’s one branching network of innovation that interests me so much that it’s found a home in the roots of a number of projects I’m working on. It’s the realm of biotechnology, and the disruptions it could bring to the stories we tell and how we tell them are significant.

In his book Sketching User Experiences, Bill Buxton includes a number of figures exploring the ways in which various technologies have evolved: from university or military research projects, into skunkworks industrial concerns, then into consumer products and (sometimes) multi-billion-dollar success stories. From the mouse to advanced gesture recognition to the Internet itself, numerous information technologies widespread today took twenty or thirty years to make any inroads in the mass market, and have enjoyed lifespans as long as the average human’s. While it’s possible we’re projecting our mortal uneasiness onto our digital creations, the more obvious implication (one Buxton also arrives at) is that getting things “right”, or even “OK”, takes an awfully long time.

See how long it’s taken the Human Genome Project and its forebears to gain steam? See how long it took gesture-based interface technology? See bright things in your future / bloodstream?

From Bill Buxton's book Sketching User Experiences, all rights reserved

If we keep this lag time in technological adoption in mind, perhaps looking to the nearest experimental physics laboratory for design inspiration isn’t the way to go… instead, we should be looking to the steaming pot of technological innovations from the last forty years, and trying to figure out what is right on the brink of boiling.

Time and time again, when I sit down and play this “Where’s Waldo” game, what I can’t help getting excited about is biotech – design and engineering endeavours bringing living systems and advanced information technology together. I’m certainly not alone – a 2005 report by the US National Science Foundation highlighted the ways in which so-called NanoBioInfoCogno revolutions could transform the world’s industries and societies rather extensively. There’s even an essay by Newt Gingrich sandwiched in the middle, if you’re feeling naughty.

What door do experience designers use to enter this party? Aren't soft skills a critical part of this convergence?

But while the technical breakthroughs and complex technical underpinnings of those transformations are exciting, what I find most interesting are the questions to do with products and services. And not just the economic and business model questions – the experience design questions, as well. What new stories will NBIC innovations allow us to tell? With what new tools will we weave them? Will contemporary models of interaction apply between form and content, or will we witness the mass emergence of invasive species and new food chains in our media ecology?

Should society take the red pill, or the blue pill?

In design fiction, the practice of creating artifacts and experiences from future hypothetical scenarios, biotechnology already runs rampant. Particularly in science fiction cinema and literature, it’s hard to put down your glass without spilling someone else’s cocktail of personalized medicine (with a wetware wedge). Paul Di Filippo refers to the domain as RiboFunk in his writing. Inception, The Matrix, A Scanner Darkly, Children of Men... these films and many more all have their narratives woven finely through a lattice of biotechnological products. In gaming the situation is similar, from Assassin’s Creed to Deus Ex (both intellectual properties developed in Canada… perhaps this is our thing?) Björk’s collaboration with Scott Snibbe, Biophilia, is an interesting new kind of cellular entertainment in a number of senses. The subject matter is even beginning to pervade the meatspace… I’m in the midst of building out an experience called ZED – it’s a transmedia biotech role-playing adventure that will unfold across Toronto over the next eight months, letting audiences take serious bites out of a story exploring the darker sides of our potential future.

If these are the stories exploring a world of ubiquitous biotechnology, what new tools might emerge to tell them? Some say those tools are technological – Rohit Talwar of the foresight consultancy Fast Future suggests that by administering narcotics and nootropics activated by electromagnetic stimulation, the DJs of the future could take us into personally curated altered states… with the requisite $20 cover, of course. The BioArt work of Steve Kurtz famously (and disturbingly) saw him threatened with bioterrorism charges, setting an intimidating precedent for the use of living materials and systems in artwork exploring our relationship with technology. Nevertheless, I’m looking forward to the MakerBot Replicator of 2024… there’s nothing like printing a custom OLED slime mold and enjoying interactive TV on all the walls (and ceilings) of your house.

On the other hand, perhaps the tools that most directly steer our relationship with biotechnology in the future will be our values. People have been using “organic biotech” drawn (and sometimes domesticated) from nature for the purpose of inducing altered states of awareness and interaction with information for thousands of years. The posthumanist movement adheres to the value of updating our neurological firmware for better adaptability to change, and on the conceptual level, it’s difficult to identify significant distance between their philosophy and that of an experienced shaman or field ethnobotanist. That said, if the form of our future content creation suites is to resemble a patch, tab, or injection, we’ve got some serious social issues to address… at the speed of technological innovation.

When it comes to biotechnology, industrially or socially, there is very little that isn’t still up in the air. The industry is only now becoming profitable, and in the coming years leaders, policy-makers, and just about everyone else will face numerous opportunities to interrogate emerging technologies and make decisions about their use. But as the DIYBio movement also surges, traditional stakeholders from industry and the media alike run the risk of being overwhelmed by a rogue wave of indie biotech experience designers. This is important to keep in mind – as a civilization we aren’t always very good at cross-generational and open-minded discourse on ethnopharmacology, as Richard Branson has been pointing out quite frequently of late.

Generally not a great thing to see at a biotechnology convention.

Of all the ways to think about what could result from the rapid adoption of biotechnology into niches currently filled by established material and information technologies, the use of stories strikes me as the most interesting. Films, games, and other narrative media act like mirrors trained on our social values and behaviours as much as bacterial growth on the surface of new gear. There was an issue of HorizonZero in 2005 that really got me thinking about how we treat this topic; it’s still a great read.

While biotech may ultimately be responsible for reshaping how we tell stories and design experiences, I think it’s just as likely that the ways we reflect on biotechnological themes and subject matter today will shape the form these disruptive and revolutionary industries eventually take. The best way to consider the relationship between form and content is as a feedback loop: it’s never too late to get in early, and no matter how far you push the boundaries, you’ll never be done.

Trevor Haldenby is an interactive producer and photographer living in Toronto. He has attended Wilfrid Laurier University, Rhode Island School of Design, and CFC Media Lab, and is presently completing an MDes in Strategic Foresight & Innovation at OCAD University.
@trevver

Happy Winternet

Photo courtesy of Adam Foster on Flickr

With the seasonal holidays upon us – from the Solstice to Christmas, Hanukkah to Kwanzaa – I set out to write a post exploring what these darkest of days in the Northern Hemisphere mean to me as an individual with technology on the brain. What I learned is that the holidays, for all of their affiliations with altruistic acts of kindness, are about as digital as the special effects dominating each December’s blockbuster cinematic offering. Focusing in on the systems and assorted trappings associated with the Christmas holiday I celebrate each year, it’s rather difficult to separate the mistletoe from the machine.

The most notorious of connections between the winter holidays and technology are, blissfully, already nearly a month behind us. Black Friday and the following Cyber Monday, some of the most significant days of the year for the technology-infatuated, kick off a holiday season that by some estimates accounts for more than a quarter of annual personal retail spending in the United States. How much of that spending is allotted to game consoles, iPads, and DIY kits for 3D printers remains a mystery to me, but after scoping out PDF flyers for ThinkGeek, Amazon, and the Apple Store it’s hard to deny the weekend’s economic significance. These days, trumpeting the arrival of the holidays a month before Christmas is hardly extreme – if you’re a Snoopy fan from my generation or younger, you may not even see the irony in It’s The Easter Beagle, Charlie Brown’s shopping centre, decked out with holly by early April. The phenomenon of Christmas Creep appears to be gaining steam.

But the holidays aren’t (supposed to be) all about mindless shopping. Just as growing numbers of businesses are trying to find ways to measure the positive social impact of their innovations, growing numbers of consumers are turning to alternative giving approaches: personalized donations to not-for-profit organizations, charities, and aggregators like CanadaHelps. It’s not hard to imagine a future where contributions to the Salvation Army are offered by dinging one’s phone against a ubiquitous (PRODUCT) RED bell, completing a transaction through the postmodern miracle of near-field communication.

In the meantime, we have a rather serious holiday reality to face in my household… and it has to do with the coniferous “plant” inhabiting our living room. While life cycle assessments of artificial trees from the Pearl River Delta versus the kind taking up over 120 square kilometres of Ontario’s bucolic landscape do seem to indicate that a PVC tree is a more ecologically responsible choice than a farmed Douglas Fir, it takes about 20 years for the plastic version to break even. New fads from fiber-optics to holographic mylar branch treatments complicate any analysis of what is presently the planet’s first human-made invasive species, but perhaps the Christmas tree’s future is bright. A global team of researchers published a report earlier this month suggesting the feasibility of an artificial leaf that could be twice as efficient at photosynthesis as what you’ve been slinging from your eavestroughs and bagging for the last two months. Who will need giftcards or presents when the seasonal decor of Canada’s snowed-in homes can scrub carbon dioxide out of the air as fast as a tropical rainforest?

As cultures have hybridized and local economies globalized over the past few hundred years, one of the touchiest issues raised has been how to refer to the diversity of world holidays celebrated during our winter months. The city of Birmingham proposed “Winterval” for a few years, one of the writers of Seinfeld introduced us to the concept of “Festivus”, and affiliates of the Pastafarian movement (praise be to the Flying Spaghetti Monster’s noodly appendages) have settled on the to-the-point “HOLIDAY.” But for a secular and technology-loving guy like me, none of these titles really fit the bill.

In hopes that a look at the winter holidays through the lens of personal technology would inspire me to come up with a new name, away to the Windows (I mean, Mac OS) I flew like a Flash (I mean, HTML5 canvas). What I learned stunned me: Did you know that the first public client-server communication over the Internet was conducted on December 25th, 1990? That makes this Christmas the World Wide Web’s 21st birthday… perhaps it should be the one cracking open the rum and egg nog, not me. But the connections don’t end there – Claude Chappe, creator of a semaphore system referred to as the first “mechanical Internet”, was born on the 25th of this month in 1763. Nearly two centuries later, so was Rick Berman, (notorious) producer of the Star Trek series that inspired so many in terms of gadgetry. The same date marks the birth of Clara Barton, a patent office clerk who went on to found the American Red Cross. If the contemporary holiday season is characterized by contrasts between economic, technological, and altruistic factors, then I think these achievements and figures are as relevant as any historical precedent or astronomical phenomena.

Since I was a child, I’ve been told that the real value of the holidays is an opportunity to step back and consider the opportunities I have, and how to pay the benefits I enjoy forward to others. Perhaps the world’s technology conglomerates, bestowed with so much opportunity and wealth, could take such a philosophy to heart. Rather than soaking up the revenues of yet another Cyber Monday, imagine a holiday when the largest tech innovators practiced a unique version of “alternative giving”, donating one or two of their most closely-held patents to the public domain each year.

Until then, I’ll continue to practice a modified version of a childhood Christmas ritual… leaving a plate of $10 and $20 bills out with a glass of milk for Jimmy Wales, and the rest of the elves at the Wikimedia Foundation.

Thanks for reading, have a Happy Winternet! If tossing CFC Media Lab staff at various flimsy structures for kicks is more your thing, then check out Angry Elves, the Lab’s tongue-in-cheek gift to you this holiday season!




Two months ago, and after years of what some have called hyper-careful consideration, I proposed to my girlfriend. While I’d like to say that I’ve been caught up in a rose-coloured world of emotional contemplation since then, as you might have guessed, I’ve been thinking quite a bit about technology.

How does the gear we use to underpin the most sacred and emotionally charged moments of our lives wind up impacting the experience? What new tools might help us visualize and conceptualize not only “the big day” itself, but the weeks beforehand and the decades on the other side? From invitations to receiving lines, many aspects of the experience ahead of my fiancee and me stand to be influenced significantly by technological tools.

Let’s start with the question itself. While it seems like a big enough moment on its own, my marriage proposal was actually a reflection of years of thought and deliberation. After I’d picked out a ring (and learned everything I could from the Blue Nile iOS app), I faced the challenge of trying to represent all of the thought I’d put into the topic in a clean and concise manner. I’ve got quite a reputation as a head-in-the-clouds rambler, so I chose to lean on a method gleaned from my ongoing studies in strategic foresight to seal the deal. (Note: if you have a reputation as a head-in-the-clouds rambler, articulating your marriage proposal in the language of scenario-driven matrices may not be a welcome breakthrough.) By mapping our visions of life together onto four quadrants of a matrix, I felt I’d set the table for a “Yes!” without confusing my fiancee much more than I usually do. However, worried about the reaction that my presentation of marriage-as-learning-organization might elicit, I decided to ground things firmly in humanity by planting my proposal as a typically gangly foresight acronym within the exercise. Let’s just say that she was just as thrilled by the W.I.L.L.U. approach to lifestyle strategic planning as I had hoped.

With a ring on my partner’s finger and family members across the globe notified efficiently and economically through VOIP and videochat, we faced that moment of peril that all young couples encounter – updating our social media statuses. What hashtag would we use to loose news of our decision upon our networks? After a full day of stressing over the issue (having decided that buying URLs for our names and +1’ing each other was a bit geeky) we decided to lean on Facebook as our platform of choice. We figured that changing our relationship status would send a big enough ripple out to girlfriends and old buddies hunched over emotional seismographs that we could avoid information architecting a complex online campaign entirely. Fifteen minutes, a few hundred comments, and 50% of our iPhone batteries later, this suspicion was confirmed. I have a feeling that Facebook routes major upward changes in relationship status to the top of everyone’s news feed, but I haven’t had enough of a peek behind their algorithmic curtain to be sure.

We were so caught up in the deafeningly positive response to our status change that it wasn’t until weeks later that we realized that there were a number of people on the other side of the digital divide that we’d excluded from our news entirely. Some of my oldest friends, who use social media to push information but not to consume it, didn’t find out for weeks that I’d finally popped the question. As news of major life decisions goes increasingly electronic, will those with less than monogamous relationships with one or more social networks be cast further and further out of the loop?

Even with all the uncertainty and informality associated with online communication, when the time comes to request the presence of our best and dearest at the main event, we’re probably going to keep it digital. Invitations are an expensive and time-consuming part of the wedding planning process, and while I’d like to toss the contract to some talented artist I find on Etsy or Kickstarter, I’m more likely to try and micromanage the process myself on Eventbrite. It goes without saying that this is a debatable time-saver – paper choices and recursive enveloping decisions may be replaced with new concerns ranging from email analytics to browser compatibility. It had never occurred to me that the Flash vs HTML5 debate might hit quite so close to home.

Once all of our guests have fished our invitation from their junkmail folder, and provided their blessing by whitelisting our address, the question of what to do with their online avatars arises. The latest fad at conferences and seminars is to plaster the venue with projected Twitter walls, enabling guests to liveblog proceedings and point out factual errors in presentations… but I’m not sure that I want to wedge a similar technology into my wedding. Just imagining “Did u know that @trevver was 2 nervous to follow up for a #seconddate?” forty feet wide above our head table is enough to make me uneasy.

But the logistical questions around technology at our wedding don’t end there. Should we have a second ceremony in Second Life to accommodate members of the wedding party who couldn’t use Kayak to book a flight in time? Will guests be interested in contributing to a tag cloud, or a data visualization project hosted by Hans Rosling, and dedicated to linking the events of our relationship with global birthrates and economic crises? Are people building apps for their wedding itinerary, and if so, should we scope Playbook development into our project plan given RIM’s recent financial performance?

Even our clothes and accessories won’t exist outside of technological consideration – nothing says “I do” like a ring with a built-in USB flash drive, or a dress featuring LEDs that fade in and out in relation to our accelerating heartbeats. Wearable technology could become the next big thing under the altar, just as branching interactive cinematic experiences might begin to replace the photo slideshows of yore. Judging by the number of embarrassing videos of yours truly stored on my fiancee’s smartphone, I could be in real trouble.

While there aren’t very many things in my life that exist outside of some technological context, I assumed very briefly that the ceremony of marriage might be one of them. It’s been fascinating in these last several weeks to think about how weddings, among the most deeply human events of our lives, are shaped profoundly by technology… right up until the moment where we say “iDo.”

Languages and their SQLs

Languages are the vessels of culture and history. Unfortunately, many of them are also going extinct at an alarming rate. As English, Mandarin Chinese, Spanish and a handful of other languages dominate the globe, ancient tongues are often unable to compete for a userbase of fluent speakers, leaving them vulnerable to some of the same selective pressures that might unseat the inhabitants of an ecological system.

The organization Ethnologue estimates that there are approximately 6,900 languages in use today across the globe. Many scholars argue that the actual number could be much smaller than that, given that many languages are closely related to one another, and are spoken fluently by very few people. Michael Krauss (a linguist and language documentation advocate) and UNESCO estimate that up to 80% of global languages are at risk of extinction in the next 50-100 years.

Patricia Ryan on what we miss when we insist on English.

But while the languages we humans have spoken for thousands of years are experiencing a global crisis not unlike the one facing most of our mammalian cousins, there is one linguistic taxon that is experiencing a period of growth and prosperity – computer languages.

In the same period of time that most of the world’s languages have become classified as endangered, more than 8,000 computer languages have come into existence. The exact number is difficult to determine, though the Programming Language Popularity project and Transparent Language Popularity Index are attempting to index them. Many computer languages are close cousins, and a few dominate the landscape on the shoulders of others, but it’s interesting to note that as we move towards a real-world environment of linguistic homogeneity, we’re experiencing an explosion in virtual linguistic diversity.

From a great history of computer languages in Wired, at the DigiBarn

But for me, here’s where things get puzzling. While I’m an English speaker (the so-called language of the Internet), an elder digital native (an evolution of Marc Prensky’s term), and a creative type when it comes to all things technological, I don’t speak a single programming language – certainly not anything like Glagol, anyway. How could this be? How could someone raised on computers, employed and engaged in the interactive industry, and pretty geeky by many standards, not be able to program?

I said that I can’t program… but depending on your definition of the term, that’s not entirely accurate. SQL is not a language I speak (how many programmers do “speak” in their tongue?), though I do read and write HTML, and I have been known to dabble in simple JavaScript. I’ve never written a graphics driver, but I do muck around with some frequency in software suites like Max/MSP and Quartz Composer. My introduction to “app development” was with HyperCard and Macromedia Director, not Objective-C.

If computer literacy is defined as familiarity (perhaps acquired through rote learning) with a set of specific tasks and workflows, and computer fluency as the ability to apply advanced concepts about the functionality of a computer and its languages to the solution of problems, then I have a real problem when it comes to classifying myself. I’ve always felt comfortable in front of a computer, particularly when I don’t know exactly how to approach the problem before me, or where an application of critical design thinking is just the ticket to help identify a solution. You could say that I’m conceptually high-functioning, and essentially literate, but can one be classified as fluent without a mastery of the basic skills underpinning the whole Wizard of Oz show?

The tools that I learned to “program” interactive experiences on were tremendously eye-opening, but rather limited when it came to compatibility. Flash and Director relied on proprietary plug-ins to play back content from a CD-ROM, or (gasp!) in a browser window. At the same time, the HTML standard managed by the World Wide Web Consortium promised increased compatibility… but fewer flashy features. No tweening, translucency, or motion graphics for you, open standards zealots.

The Wilderness Downtown: Digital Natives' Citizen Kane.

HTML5, the most recent version of the W3C’s standard, is bringing much of the glitz of Flash and Director back into the browser… and in a way that ensures broad compatibility without the restrictions of proprietary technology. If you’ve seen the collaboration between Arcade Fire, Google, and Chris Milk, The Wilderness Downtown, you’ve got a good idea of what HTML5 enables. That said, I suspect that most people are familiar with HTML5 and its potential only because of Apple’s famous decision to “ban” (the oft-delayed and resource-hogging) Flash from its iOS devices.
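The tweening that Flash and Director once had to themselves now takes only a few lines of plain script. As a minimal sketch (the `easeInOutQuad` and `tween` helper names are my own illustrations, not part of any standard API or library):

```javascript
// A Flash-style "tween": interpolate a value from `from` to `to`
// over a fixed number of steps, shaped by an easing curve.
// (Helper names are illustrative, not from any library or standard.)

// Classic ease-in/ease-out curve: slow start, fast middle, slow finish.
function easeInOutQuad(t) {
  return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
}

// Returns every intermediate value; in a browser, `onUpdate` would set
// element.style.opacity or trigger a canvas redraw on each frame.
function tween(from, to, steps, onUpdate, easing = easeInOutQuad) {
  const frames = [];
  for (let i = 0; i <= steps; i++) {
    const value = from + (to - from) * easing(i / steps);
    frames.push(value);
    if (onUpdate) onUpdate(value);
  }
  return frames;
}

// Fading an element in: opacity from 0 to 1 over 60 frames.
const fade = tween(0, 1, 60);
```

Driven from `requestAnimationFrame`, a loop like this reproduces the fades, translucency, and motion graphics that once demanded a proprietary plug-in.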

Perhaps designed to cater to people like me (high-functioning conceptual creatives who flunked out of math class), a new and interesting generation of production tools is emerging that promises to address issues of compatibility and ease-of-use simultaneously.

Applications like Tumult’s Hype (and Adobe’s Edge) are offering the ability to author slick HTML5 experiences through an accessible interface… it’s what iWeb probably should have been on day one (and probably will be within the next few years) – a web-app creator for the rest of us.

Macromedia's Director vs. Tumult's Hype... interaction design for the rest of us?

While these tools are incredible, offering users with an understanding of design patterns rather than code the ability to churn out new content and product, they’re troubling in that they rely on increased computer literacy without offering much in the way of enhancing fluency. They offer enormous numbers of people previously not capable of authoring interactive content experiences the opportunity to do so, with the caveat that they need not learn much about what’s under the hood in the process.

What might the long-term effects be of offering users a language of concepts and design patterns without an underlying vocabulary? Must all designers, programmers, or architects be linguists in order to produce meaningful and innovative work? What connections might exist between computer literacy and fluency, and the bigger social picture? In the United States, a person’s level of functional literacy correlates roughly with income and with the risk of criminal involvement. Are technophiles like me, managing high-level semantic fluency with few syntactic skills, an at-risk group for media piracy or affiliation with Anonymous?

In the real world, we are witnessing a struggle to preserve languages in the face of an almost virally expansive linguistic homogeneity. What makes us think that the emergence of a dominant digital language (even if it’s an open standard) will enable an equitable representation of ideas, views, and other information in its realm? Perhaps the legacy of projects like the Long Now Foundation’s Rosetta will be to digitize languages on the brink of extinction today… so that their unique attributes might be analyzed and adopted into the computer languages of tomorrow.



Hylozoic Tools

Standing beneath the work of Philip Beesley Architect Inc., keeping your jaw closed and eyes on where you’re walking are tall orders. Delicate and yet aggressive structures of webs and skeletons ascend above you, colourful mixtures of wet chemicals swishing and condensing throughout. The experience is not unlike drifting through a coral reef, except that the reef floats above rather than unfurls below… kelp forests might be a better comparison. A diversity of strange forms abounds, resembling creatures that would look at home in the ocean’s hyper-pressurized depths. The interplay between phyla and species makes it difficult at first to contextualize Beesley’s pieces in human terms, and yet they are decidedly technological… mechanical mutations thriving at the fringe of some emergent ecological niche.

Sargasso, on display during Luminato at Brookfield Place

After an afternoon spent at Sargasso (the architect’s most recent Canadian installation, and a highlight of the 2011 Luminato Festival) I had to learn more about the motivations behind this exciting new work, as well as the tools required to realize it. In a series of conversations with Philip Beesley and Rob Gorbet, one of the architect’s closest collaborators, I learned about both, as well as how innovative approaches to architecture and interactive storytelling might have more in common than I had presumed.

Philip Beesley’s work has parallel foundations in professional architecture and contemporary sculpture, and he sees the collision of these circles as a valuable foundation upon which to ask questions about the nature of interrelated systems. Rob Gorbet teaches electrical, computer, and mechanical engineering at the University of Waterloo, but one of his favourite educational efforts is a course pairing sculpture students with upper-year engineers to explore models for collaboration. Beesley and Gorbet’s collaboration is therefore not only an opportunity for artist to engage engineer; it’s an opportunity to compare strategies for sharing across disciplines, vocabularies, and metrics of success.

Sargasso, on display during Luminato at Brookfield Place

In our discussions, Gorbet referred to the “consultant versus collaborator” choice often presented to engineers working with artists. While he tends to begin his working relationships with artists as the former, he prefers the latter for its emphasis on “Why?” questions and mind-bending learning experiences. I’m in alignment – my personal experience has been that competition within silos or fields tends to encourage excellence, while collaboration between disciplines more often rewards interestingness. Beesley articulates his motivation for working with outsiders in a slightly different way – it’s an opportunity to ensure genuine incompletion, and to encourage disjunctive mutations within a suitably resilient environment. Whether in biology or collaborative creativity, productive mismatches can lead to amazing innovations.

The twitching and cascading Hylozoic series of installations have evolved through an iterative design and prototyping process into a fascinating diversity of forms. In order to realize their vision and keep up with a self-imposed evolutionary arms race of innovation, Beesley and Gorbet have had to design just as exotic an array of production tools. The materials used in the construction of the Hylozoic series, for instance, range from hyperbolic meshwork scaffolds and shape-memory alloy actuators to electrogalvanic bladders and components fabricated from biodegradable cellulose.

Rob Gorbet and Jonathan Lau working at Philip Beesley Architect Inc.

But the new tools being utilized to solve problems at PBAI aren’t all material. A sharp young engineer named Jonathan Lau is collaborating with Beesley and Gorbet on the development of a simulator that will capture data from the firm’s various physical installations; providing the opportunity to analyze specific behaviours after-the-fact, or from thousands of kilometres away. This is important because one of the most interesting attributes of Beesley’s works is the way in which they respond to the physical environments in which they are situated. The space in which any work of art is installed will shape it physically and situate it contextually over the course of its lifespan; but Hylozoic Soil physically responds to individual users engaging it through a network of whiskers and fronds, altering its overall systemic behaviour in response to the sum of those interactions over time.

While a work of architecture that responds to user interaction is amazing enough, that response is really only as good as the emotions it evokes. Over lunch one day, Gorbet and I discussed the work of Paul Ekman, which suggests that although physical expressions are tied to a diversity of underlying emotions across human cultures, a few basic emotions (anger, disgust, fear, happiness, sadness, and surprise) are associated with certain facial expressions almost universally. In order to convey emotional expressions outside of that basic set in high-tech installations, work is underway at PBAI to motion-capture Beesley’s physical interpretations of concepts like grace and elegance. By capturing and quantifying the architect’s various physical expressions of an emotion, behaviours can be designed and uploaded to his installations that, while not necessarily universally understandable, bring the pieces to life in new and exciting ways.

Sargasso and development on the Hylozoic Series

After engaging Beesley and Gorbet in conversation, and spending an afternoon at the PBAI studio in Toronto’s west end, I felt as though I’d had my mind opened up in a dozen new directions… but once I sat down to write this piece, I began to wonder why these thinkers’ astute reflections on technique and motivation seemed so familiar to me. I realized that it was because “good” experimental architecture and “good” digital storytelling have more in common than I’d assumed: risky collaborations between creative types from different fields seem to be behind the most interesting works in both realms. So do deep conceptual and technical investigations into the quantification and translation of emotional experience. Same with the design of new tools for concocting and measuring meaningful interaction.

I thought back to a conversation I’d had with Philip Beesley about his work, and how he’d tackled the creation of something emphatically new and experimental that would nonetheless need to resonate with audiences lacking a formal context for approaching it – a potential problem for architects and interactive storytellers alike. His feeling was that art culture, with its exclusive and sophisticated knowledge, is often positioned against mass culture, seen as having brutal or reductive qualities… but that this need not be the case. Beesley’s philosophy is that the appeal of collective experience runs so deep that perhaps the primary pursuit of creators across cultural, critical, and formal boundaries should be collective delight. Curiosity as inseparable from interconnection. Life as inseparable from matter.

Trevor Haldenby is an interactive producer and photographer living in Toronto. He has attended Wilfrid Laurier University, Rhode Island School of Design, CFC Media Lab, and is presently completing a Master’s of Design in Strategic Foresight & Innovation at OCAD University.

The Future of Photography

When I was 16 years old, I got my first “real” camera, a Minolta X-700. A number of years ago, I tracked down a copy of the company’s production records and managed to locate my camera’s serial number within a production run during the summer of 1982… the same year and season in which I was born. While I’ve been a photographer ever since that summer for more reasons than Aperture has processing presets, I like to think that starting my habit with a tool that shared my personal history has been important amongst them.

During my university years, whilst buried in stacks of film theory and anthropology texts, not to mention an early digital point-and-shoot or two, the trusty X-700 never left my side. When I graduated, and decided to pursue adventure on the high seas of interactive storytelling at the CFC Media Lab, I bought my first non-film SLR: the Canon 300D Digital Rebel. The unit’s heft and imposing industrial design, unusual in a consumer product, prompted more and more people to ask me on my walks about town if I was a professional shooter. Eventually I decided that it would be advantageous to start taking work, and giving a positive response to the question.

The first three cameras I owned.

Now, several years in, my tool of choice is Canon’s 7D DSLR. It’s a powerhouse that not only writes photos three times higher in resolution than my 300D with half the required light, but also shoots HD video with an aesthetic that can only be described as cinematic. My photography business has been growing steadily, with a roster of clients in the arts as well as the sciences, and for a few brief moments this year I felt as though I’d stabilized my relationship with my craft…

My next three cameras... decidedly more pro, but less diverse in form.

That state of grace was short-lived. Since starting my Guest Editorship of this blog back in April, I’ve been thinking more and more about the relationships between new ideas and new tools. While the cameras in my menagerie have grown more technically complex over the years, there have been few real paradigm shifts in terms of what they offered, outside of the context of video and RAW production workflows. All of the upgrades to my kit in the last decade were justified as increases in efficiency and performance. Each new camera has also been more expensive than its predecessor, in terms of finances, posture, and general quality of life for my neck muscles.

After enduring significant and successive financial investments, not to mention the costs associated with confronting new operational process and practice, I am still taking (not to mention giving back) the same kind of photographs I’ve been capturing since I was a teenager.

With this realization in mind, I posed a challenge to myself several weeks back: if I could design a camera offering real departures from convention and usability paradigms, rather than incremental advances upon existing technologies and interfaces, how might the tool of my dreams wind up working… and what new creative possibilities might it open up?

1. Rich Imaging / RAW Composition

The first bit of functionality on my dream-list was something that I considered flat-out fantastical: the ability to tweak images in a post-production workflow not just in terms of exposure, contrast, and colour balance; but also in terms of focus and composition. Imagining a photographic deliverable that offered a near-infinite level of zoom, or a still image that facilitated interactive narrative experiences excited me tremendously.

2. Connected Imaging / Cloud Photography

While I’ve been a cloud-gazer since childhood, this feature doesn’t refer to “sky art.” A camera that made use of WiFi, GPS, Bluetooth, and whole families of ambient sensors could revolutionize photographic metadata. As the camera logged barometric pressure and position information to make recommendations about ideal snapshot opportunities, it could calculate white balance in the context of atmospheric scattering effects imperceptible to the human eye. The camera could even identify and tag faces or places for on-the-fly uploading, eliminating a rather cumbersome portion of the social sharing experience. Storage products from Eye-Fi already perform some of these services, but imagining an innovative new visual computing platform rather than limited featuresets had me fascinated.
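The metadata bundle imagined here is easy to sketch. The structure below is purely hypothetical (every field name and value is mine, not any camera’s actual spec); a real connected camera would serialize something like it into EXIF or XMP alongside each frame:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CaptureMetadata:
    """A toy record of what a sensor-laden 'cloud camera' might log per shot."""
    latitude: float
    longitude: float
    pressure_hpa: float   # barometric pressure from an ambient sensor
    white_balance_k: int  # colour temperature the camera settled on
    tags: list = field(default_factory=list)  # faces/places recognized on-device
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# One hypothetical capture, tagged and ready for on-the-fly upload
shot = CaptureMetadata(43.6532, -79.3832, 1013.2, 5600, ["CN Tower"])
print(shot.tags[0])
```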

Proud of myself for challenging just a few of the conventions of my hobby and career path, I strolled over to my computer after the exercise and checked my email. Imagine the look of surprise on my face, not to mention the amount of drool upon my keyboard, when I opened the message in my inbox from Lytro, a company I’d heard murmurs about months before.


This year, Lytro will debut the first light field camera for everyone. OK – you’re not everyone. You are a beautiful, unique snowflake. And you deserve an amazing camera that lets you capture life’s singular moments, like baby’s first steps not second, with maximum magic and minimum hassle. With a Lytro, you unleash the light.

The team at Lytro is completing the job of a century’s worth of theory and exploration about light fields. Lytro’s engineers and scientists have taken light fields out of the lab – miniaturizing a roomful of cameras tethered to a supercomputer and making it fit in your pocket

After retrieving my jaw from a pile of papers and CompactFlash cards beneath the desk, I began scouring the precious few blog entries on Lytro’s site. Built on founder Ren Ng’s 2006 PhD thesis at Stanford’s Department of Computer Science, Lytro seems to have a singular (and yet open-ended) mission: to destabilize and disrupt the camera industry with a new type of product that doesn’t only photograph faster and with less fuss, but that actually enables an entirely new approach to imaging and image-perusal.

Going back to photographing "auras" with infrared-trickery isn't very appealing, is it?

Rather than capturing the flattened wall of photons colliding with the CCD or CMOS sensor of a “traditional” digital camera, Lytro’s camera will be based on much richer light field technology. Technical details are few and far between as of yet, but there are a few decent analogies that may help you wrap your head around the innovations associated with light field sensors.

In his thesis, Ng refers to how a music studio is designed to capture sound within an enclosed environment, but can accomplish this goal in a number of ways. The recording engineer can use a room mic to capture the guitars, drums, and vocals on a single track; or she can mic each of the instruments independently, combining them into a composite on the mixing board with a much greater degree of control and fidelity. Traditional film and digital cameras capture the sum total of light reaching their sensors (or film), translating it into an image with the help of the camera’s lens.

Light field sensors, on the other hand, perform a task more akin to multitrack recording… light traveling along individual rays towards the camera is recorded and stored on a ray-by-ray basis. Light field technology brings a significant increase in computational overhead to photography, but the image data that is recorded can be reconfigured in any number of different ways. Want to build an image around the light bouncing off of a foreground subject? No problem… just prioritize that. Want to change focus after the fact so that you can see what that girl in the background is up to? Simply recompute the light ray information to solve for a different compositional variable.
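Ng’s thesis formalizes that “recompute the rays” step as shift-and-add refocusing: every aperture viewpoint’s image is shifted by an amount proportional to the new focal depth, then the views are averaged. The sketch below is a toy illustration of that idea, not Lytro’s implementation: it assumes a 4D light field stored as a NumPy array indexed by aperture position (u, v) and sensor position (s, t), and the function name and crude nearest-neighbour resampling are my own simplifications.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add synthetic refocusing of a 4D light field.

    light_field: array of shape (U, V, S, T) -- radiance along each ray,
    indexed by aperture sample (u, v) and sensor pixel (s, t).
    alpha: ratio of the virtual focal plane depth to the captured one;
    alpha = 1.0 reproduces the as-shot focus.
    """
    U, V, S, T = light_field.shape
    s = np.arange(S)
    t = np.arange(T)
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each aperture view is shifted towards its own viewpoint;
            # the shift magnitude depends on the chosen focal depth.
            ds = (u - U / 2) * (1 - 1 / alpha)
            dt = (v - V / 2) * (1 - 1 / alpha)
            si = np.clip(np.round(s + ds).astype(int), 0, S - 1)
            ti = np.clip(np.round(t + dt).astype(int), 0, T - 1)
            image += light_field[u, v][np.ix_(si, ti)]
    return image / (U * V)  # average the shifted views
```

With `alpha = 1.0` the shifts vanish and you get back the ordinary photograph (the plain average of the aperture views); other values of `alpha` move the plane of focus after the fact, which is exactly the trick being commercialized.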

If you got into photography to get out of math class, I have some very bad news.

In just a few short minutes, both of the items on my dream-list of camera functionality had been addressed, and both were going to be commercialized in one product before the end of this calendar year. The only sensible way to offer a camera with a light field sensor (requiring enormous amounts of data capture and processing) was to make that camera a cloud device, as well.

The creative possibilities of Lytro’s innovation are staggering – and I get the feeling that the surface hasn’t been scratched very deeply just yet. Images featuring unprecedented clarity of focus and depth-of-field will be possible with minimal ambient light levels, rather than the bright daytime sun associated with today’s f/11 photographs. 3D images will become just another export option, rather than compromised deliverables requiring complicated workflows and dedicated hardware. Algorithms for eliminating noise and grain will be limited only by server-side processing power and network connectivity.

But what about the big paradigm shifts in terms of photographic storytelling? Just imagine the possibilities opened up when shooting stills or video on a camera that allows its images to be refocused and reframed not just by the operator, but by audiences as well! Lytro already hosts a gallery of demonstration images from its yet-unseen camera, but imagine what this kind of gear would be capable of in the hands of skilled visual storytellers with diverse backgrounds, compositional languages, and philosophies around user experience.

These two images (and many more) exist within a single Lytro photograph.

While the Lytro camera is still at least months away, even the information currently available online about its development is enough to make any photographer’s hairs stand on end… and they’re certainly not the only innovators in the game. Other companies and organizations of all sorts are also exploring new frontiers in the DMZ between artistry and technology: Calgary’s UrtheCast is launching the first space-based HD webcam next year, while a team at Cornell is using some of the same math as Lytro to fit a low-resolution camera on the head of a pin.

One of the reasons I began pursuing photography as a career was the social and professional opportunities the craft promised – you can easily justify tracking down and briefly interrogating (visually or intellectually) people doing fascinating work in just about any field imaginable. Photographs have come to dominate our nascent and global visual language, and everyone with a potential audience needs a photograph or two. I’ve had some of the most interesting and inspirational conversations of my life with a camera in one hand and flash in the other. But while the career provides amazing opportunities, it also comes with a few caveats… you only have so much control over the images you create (unless your reputation precedes you by a few kilometres) and the environments you craft them in; and when the shoot is over, you’re usually out the door post-haste.

For years I’ve been trying to figure out how to synthesize my interests and skills in the design of emergent technologies, and the capture and presentation of powerful images and stories. A company like Lytro, which has forged its mission out of technological and artistic innovation, reminded me of how crucial it is to work on the borders of these two camps. While a camera that was manufactured in your birth month is a worthy ally, a camera that resonates with your creative philosophy and passion for innovation is an even more powerful tool.



In 1922, Ray Cummings famously noted that “time is what keeps everything from happening at once.” 90-odd years later, it would appear that we have arrived at a confluence of opportunities to question Mr. Cummings’ observation – but pondering the nature and valuation of time is hardly something new.

For centuries, philosophers and scientists alike have argued about the nature – versus nurture – of time. For those in the Newtonian camp, time is a fundamental structure of the universe… but many thinkers since ye olde days of pomaceous violence have attempted to explain time as an intellectual or cultural structure.

Regardless of which side of the fence you’re on, examining the tools that we have devised over the years to “tell time” is a fascinating activity. These tools have helped us shape our scientific understanding of the world in which we have evolved; while also telling us a thing or two about why we, highly creative and social animals, value such understanding in the first place.

Clocks are the most obvious and ubiquitous of our time-telling tools, and they come in an appropriate diversity of forms. Solar clocks such as sundials, perhaps our earliest information technologies, have been around for millennia. Mechanical clocks have been en vogue (though not necessarily freeing our minds) since escapement mechanisms were devised in 11th century China. The first atomic clocks (built upon notions of Lord Kelvin’s from the 1870s) made their appearance on the scene in 1949, and have steadily advanced in accuracy and complexity ever since. In recent years, ubiquitous desktop, toolbar, and dashboard clocks (not to mention per-minute cellphone plans) have contributed to a culture motivated by minutiae.

But in spite of the contemporary clock’s impressive market penetration, brave chronologists all over the world are innovating new tools that diverge significantly from our current kit. Indeed, some of these new tools for measuring and contextualizing time have been designed specifically to supplant our current reasons for keeping an eye on the clock.

Just what does the Long Now Foundation mean by "Long Now?"

I’ve been a member of the San Francisco-based Long Now Foundation for several years. Established in 01996, the organization develops projects and programs (and an amazing seminar series) that buffer cultural tendencies towards perpetual growth and acceleration, and that help people deepen their relationship with the so-called “big here” and “long now.” A peek at the Foundation’s list of board members should get you excited – Stewart Brand, Brian Eno, Kevin Kelly, and Peter Schwartz are all co-founders – but things get really interesting when you look into their projects.

At present, the Long Now Foundation is several years into work on a mammoth project called The Clock of The Long Now. Built on an isolated and geologically stable bit of Jeff Bezos’ land in Texas, the Clock is a monument to resilience in the language of the very large, and very slow. Designed by Danny Hillis to solve a number of design challenges through forward-thinking innovation as well as archaeological learnings, the Clock is intended to tell reliable time for more than 10,000 years. If you go that far back in time, you find the origins of what make us modern: urban living, agriculture, and yes, time-telling tools. By prompting eventual visitors to the Clock to position their lives against the scale of civilization, not political terms or quarterly profits, the Long Now Foundation hopes to bring about social change by offering powerful experiences around ideas of resilience, responsibility, and awe. By recommending the addition of another decimal place to our presentation of calendar years, the Foundation even offers a convenient (and slightly less awe-inspiring) at-home experience.

Jeff Bezos and a component of the Clock of the Long Now (Photo: Jim Merithew)

The Clock of the Long Now may be the most conceptually audacious timepiece under construction at the moment, but it isn’t the largest. The clock atop the Abraj Al Bait Towers in Saudi Arabia will, when finished, take that title. Each of its four faces will have a diameter of over 151 feet, and as a result of its position atop the world’s second-tallest structure, people more than 25 kilometres away will apparently be able to use it to tell the time. Adam Barrows wrote an amazing article in the Boston Globe two weeks ago about this Mecca Clock, and some of the controversies it brings to the discussion on time. The clock deviates from established Coordinated Universal Time systems by 21 minutes, making powerful statements about humankind’s mastery of natural systems, and Saudi supremacy. As a number of fascinated chronologists have pointed out, you’re unlikely to get a look at this piece of work unless you happen to be a Muslim.

You should see how big the Mecca Clock's cuckoo is.

While architectural wonders across the globe seem to be the preferred approach to seeding discourse on our temporal situation, amazing leaps of understanding are taking place around the time-telling tools we (and countless other species) are born with. The equipment we use to experience the specious present is turning out to be more complex, and appropriately distributed, than previously imagined. Recent research seems to indicate that our perception of time is the result of complex systemic interaction between cerebral cortices, suprachiasmatic nuclei, and other neural networks. The deeper we look into our own timekeeping toolkits, the more it seems that Martin Heidegger was correct when insisting that “we are time.”

Time use research, mental chronometry, and time discipline are all emerging research fields dedicated specifically to understanding the human perception and valuation of time; and they all have significant implications for the design of our media.

Some of the most interesting questions for me are to do with how the tools we use to understand time, conceptual and physical, wind up shaping our media. If we were to categorize the menagerie of media spawned over the last 50 years into “Long” and “Short” forms, what might we learn? How many of those media present themselves, at least on the surface, as “long?” While many new media (and media tools) have encouraged shorter and shorter experiences in terms of narrative or direct interaction, they have also fostered enormous total durations when the entire userbase is considered. Television commercials have often been hailed as the end of the road for storytelling, but in terms of frequency and sheer number they are some of the hardiest media out there. By limiting users to 140-character breakthroughs (and memoranda of disinterest), Twitter has decreased the time required to engage in meaningful online interaction; but in terms of total content bandwidth and time spent, the service topples practically everything humanity has seen before.

Simultaneity to the extreme - Shanghai Web Designers' look at 60 seconds on the Web

Steve Jobs insisted during the development of the Macintosh that shaving seconds off of the boot-up sequence was saving lives – considering the millions of users that would be inconvenienced by wait time during the product’s lifecycle. Perhaps this aggregated and extrapolated way of thinking about time in design will be of greater and greater value in a world of bite-sized but massively populated media… I’d like to name the new framework sociochronometry.
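That aggregated way of reasoning about time is simple arithmetic, and worth making concrete. The back-of-the-envelope “sociochronometry” calculation below is a sketch in the spirit of Jobs’ boot-time argument; every input figure (users, seconds saved, lifespan) is hypothetical, not a claim about the Macintosh:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def lifetimes_saved(seconds_saved, users, uses_per_day, years,
                    lifetime_years=70):
    """Aggregate time saved across a userbase, expressed in human lifetimes."""
    total_seconds = seconds_saved * users * uses_per_day * 365 * years
    return total_seconds / (lifetime_years * SECONDS_PER_YEAR)

# e.g. shaving 10 seconds off a daily boot for 5 million users over 10 years
print(round(lifetimes_saved(10, 5_000_000, 1, 10), 1))  # prints 82.7
```

Dozens of lifetimes from ten seconds a day: the math is trivial, but at the scale of massively populated media it turns tiny interaction costs into a serious design responsibility.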

The timeline interfaces utilized in software of all sorts represent another way of thinking about tools for time. Conceptually, the software industry has oscillated for years between timeline and system-driven metaphors for designing interactions and experiences. It would appear that both approaches are fit enough to co-exist in competition, with new species emerging all the time. While these mutations and experiments flourish in some contexts; there are occasional brilliant failures, as well (see Apple’s application of the liquid bins-and-clips iMovie interface to its reboot of the Final Cut Pro suite, and the subsequent media frenzy these past few weeks).

Ray Cummings: the man who mastered time, refugee from the past, or Princess of Mercury?

In a world where the all-balancing time that Ray Cummings knew seems to have fallen behind, and where everything indeed seems to be happening all at once, it’s hard to imagine physical, conceptual, or procedural tools more useful to us than those that prompt a reconsideration of our various relationships with the hours (and aeons, and yoctoseconds) of the day.


For-Prophet Technology

A few weeks ago, I wrote about the codification of innovation at Apple Inc. While examining the management hierarchies and creative clusters at work there, I began thinking about how one could visualize the larger web of technological influence stemming from a single corporation, or product. The latest iPad may be developed in a frenzy of behind-closed-doors iteration, but its successes in the market and as a technological icon are ultimately dependent on a system of connections much larger than the one in which it was crafted. Slick advertising campaigns, generations of preceding and descendant products, and the effort of countless third-party software developers have all played an enormous part in ensuring the successful uptake and understanding of Apple’s product, for example. But above all of the Geniuses, engineers, and creatives at TBWA\Chiat\Day there is another node in the system that exerts tremendous influence: the tech journalists.

In Kim Vicente’s book The Human Factor, the development of technological products and services is presented as a process often characterized by vicious combat between “wizards” and humanists. Many episodes from 20th century design and innovation are presented in the book as cases of wizards-gone-wild… savants running amok in nuclear power plants, planes, and hospitals with little consideration for the human side of design. But while the history of consumer technology is certainly spotted with examples of design-for-design’s-sake and dangerously complicated over-functional products, it is also the tale of a unique set of relationships between wizards and the rest of us.

Alongside growth in consumer technologies in the 20th century came the growth of accessible and decipherable technology writing. Popular Science magazine was first published 139 years ago, and popular scientific writing (or some form of content production) seems to have a bright future in spite of recent trends towards science by press conference.

Click for the full image at The Joy of Tech

Journalists like Walt Mossberg, Michael Arrington, and Leo Laporte are the old guard of consumer technology journalism, working for decades alongside industries that have grown from insignificant to economically dominant, and scrutinizing the value and detailed specifications of the products they release. An article by Alan Deutschman in the May 2004 issue of Wired looks at the life and times of Mossberg himself, calling him “The Kingmaker”. Complete with a chart of economic repercussions from Mossberg’s Personal Technology column, the piece contains some fascinating insights into the journalist’s philosophy and process. Mossberg’s ex-assistant even wrote his Master’s thesis at Georgetown about Walt’s influence on global innovation.

Of course, Mossberg steadfastly denies any “special relationships” with the titans of Silicon Valleys around the world. While he does bestow the king’s crown when it comes to consumer technology corporations, he also has journalistic scruples to account for. Perhaps his recent stern response to Shantanu Narayen was as much about steering Adobe in the right direction as demonstrating Mossberg’s ethics…

Any suggestions as to who – or what – constitutes the new guard of technological journalism are fiercely debated. Stuart Miles of Pocket-lint just started curating a list of “technology voices that matter” on the Say 100 site, and apparently the feedback has been incendiary enough to prompt a “We ALSO Love:” sub-list below the fold, and frequent reshuffling of the 10 finalists.

Humans aside, the formats associated with accessible science and technology writing are (as you might expect) receiving a bit of a makeover, as well. As new technological developments are released and integrated into mainstream culture at a faster and faster pace, new evolutions are taking place in terms of the frame of reference we establish for our technological prophets. Last month, the 8th Conference on Innovation Journalism took place at Stanford University.

The IJ logo reminds me a bit too much of a Light Cycle match...

David Nordfors, the founder of the conference, explains innovation journalism as the evolution of investigative writing on technology. Nordfors argues that as advanced information technology systems become ubiquitous, the standards and conventions that we use to evaluate and make sense of them must also expand. Siloed approaches to journalism that pump out narrow analyses of local news, business, and science, he argues, are less reputable and less enjoyable than approaches resulting in integrated reflections on what are actually surprisingly integrated products. Tom Foremski reflects on the IJ8 Conference over at ZDNet, and Nordfors has a PDF online of a paper he co-authored on the interplay of journalism and innovation.

To kick off the conference, the Stanford Centre for Innovation and Communication asked what the role of journalism was in a world with over 5 billion cellphones, 600 million Facebook accounts, and one billion Google search queries per day. As traditional journalistic media become less and less functional as gatekeepers of information, what becomes of their superhero icons, the $1,000,000-salaried celebrity humanists keeping an eye on the wizards?

Walt Mossberg has seemingly already begun to prepare for the crumbling of media steeples around him – he founded the All Things Digital online publication and conference with journalist Kara Swisher back in 2007. Chris Anderson, whose Sapling Foundation administers and curates the TED conferences, is also a journalistic ex-pat. Both TED and AllThingsD are brands that bring technological wizards (and business-minded humanists) direct to audiences around the world, lowering the latency on the newest and wildest intellectual signals. But in spite of their apparent passion for cutting out the middleman, Mossberg and Anderson have both written themselves into their events – the former interviewing Steve Jobs AND Bill Gates in a famous 2007 panel; the latter greeting and shaking the hand of every TED presenter.

Kinda makes you wonder what hangs about Steve & Bill's mantelpieces, huh?

Has dusk come to the era of these techno-journalistic prophets? Have we finally reached a point in the codification of innovation and evolution of technology where translators are simply no longer required? Will hordes of lead-users emerge on the web, hungry to articulate the value and design of totally new product categories, just as they’ve taken to reviewing the incremental updates from RIM and HTC that show up at your local Best Buy?

For all of the intellectual and creative decentralization that AllThingsD and TED bring to the table, they’re also shrewdly managed by those who have been closest to the pulse of the information technology revolution of the last few decades. Perhaps, although their power has been distributed slightly, these prophets of technology are with us for good. Journalists represent a social tool that enables us to richly contextualize our physical and informational surroundings. If we want to keep in touch with generations of innovation advancing faster than generations of human beings, perhaps we’d best carefully consider the value of technology journalists, and their unique ability to pass on the wisdom of the wizards whilst poking holes in their magic.


An Open-and-Closed Case

It’s been a busy few weeks, here in Toronto.

Hot Docs and the CFC’s own WSFF have both come and gone, bringing cosmopolitan crowds of storytellers to the city in its finest season. Subtle Technologies and Random Hacks of Kindness also wrapped up their mashings of art and science this past weekend. NXNE, just around the corner, promises an influx of cultural ideas and icons. IdeaCity murmurs sweet nothings of a Walt Mossberg / Margaret Atwood rap battle to Toronto’s digerati…

With all of these intellectual shindigs afoot, I’ve found myself thinking a great deal about the interactions between our city, the various confluences of ideas that constitute its pulse, and the technologies powerfully shaping our existence and discourse alike. As popular and academic writers delve deeper into the systems associated with innovation, collaboration, and discovery, more ideas surface that refer to our urban brains as networked, and our social networks as organisms.

Conveniently, this past Saturday I had a front-row seat at the always-fun Subtle Technologies festival for a panel discussion on the topic. CBC’s Dan Misener stirred a discussion between OCADU’s Sara Diamond, Mozilla’s Mark Surman, and BoingBoing’s Cory Doctorow on the topic of how we might build a city that “thinks like the web.”

Back in 2006, I shot this photo of Cory and learned not to interview people in construction sites to thematically evoke open-source.

Early on, it became apparent that the conversation was going to swing “open”, widely. This shouldn’t have come as a surprise – Doctorow is a writer and curator who’s made a living (and sometime-cult vassalage) upon the spines of free books, Surman runs Mozilla as much like a research lab on openness as a software company, and Diamond is a multidisciplinary researcher and artist whose day-job involves elegantly catapulting Canada’s oldest and largest art school into the 21st century.

But while open source philosophy, design, and practice present a myriad of interesting processes and metaphors by which we can learn from the web in the intelligent development of our cities, some of the most interesting social disruptions of the web have been in terms of articulating the monetary value of information. Was a panel featuring three open-source advocates really the most objective approach to the topic? The argument for open source in terms of municipal (or provincial, or federal) governmental data is that information acquired through the use of taxpayer dollars is a public good. The benefits you can attain by bringing $100,000 datasets to everybody who wants them, for free, are surely significant. The UK organization mySociety has done an extensive job of rendering transparent the data and processes central to government’s operations, for example.

But as I often do at exciting multi-person panels and presentations, I found myself playing devil’s advocate.

I think Mark Surman is a great speaker, but unfortunately I have to do this for 20 minutes beforehand so I don't accidentally blurt anything.

Open source governmental data is one example of the web influencing the creation and inhabitation of cities… but what other possibilities exist? At the core of the growth of the web has been a diversity of innovative new approaches to the commercialization of information. A decade ago, few seemed to have any idea how to successfully sell and market digital music, and now the largest retailers of most media are digital. What lessons have we learned from micro-transaction business models for digital content that we could apply to governmental data, or statistics? What would be the downsides… and what might be the benefits?

The artistic and entrepreneurial opportunities of open data at a municipal level dominated the first half of the panel, and by the time we reached the end, I was having trouble justifying a question that trounced a few positions and case studies from the previous hour. There’s no doubt that the benefits of open-sourcing governmental data are significant, and that those benefits seem to blossom outwards as long as they themselves remain open(ish). Open data makes it easy for people to sell streetcar arrival apps (coming soon to a TTC bus near you), adopt puppies (apparently municipal datasets around humane society operations are some of the most popular in the Western world), and engage in high-tech GIS adventuring.

A poster dedicated to positive open-source citizenry and governance. Ooh-rah!

But there are benefits to a non-open approach to information (there’s got to be a better opposite for ‘open’ than closed… or locked… or bricked… argh). A few decades ago, Statistics Canada implemented a controversial new strategy – instead of classifying collected and collated information as a public good with zero strings attached, StatsCan would tie a few (strings) on and charge for some uses of that data as a significant cost recovery tactic. An interesting move, and one rather at-odds with the open source model that has emerged in the years since. Unfortunately, even at its peak the program wasn’t terribly successful – censuses are expensive – and little more than 4% of StatsCan revenue in the 1990s came from the sales of products and services.

This image depicts the divergent ocular biology evolved by politicians to observe the world.

Tony Clement is back in the headlines this week, following up his Assault on the Longform (this will make a great science-fiction adventure movie if we can pull the wool over our kids’ eyes…) and Battle for Bandwidth with a strategy for increased user fees to offset a tax rate plateau. What might be the outcomes of a new micro-transactional approach to municipal data in a political and social climate of increased user fees? Free beer is all well and good, but it seems to me that charging for data that remains free-like-speech is also quite appealing.

What about a City App Store, where the keepers of the information can balance their (low-overhead) books by selling maps of the plumbing beneath Yonge Street, and where citizens can interface not only with the data but with all manner of Creative Commons-licensed visualizations thereof… I’d certainly be on board with paying for some of the city’s currently Open Data if it were packaged in a compelling experience designed by some of Toronto’s best and brightest. San Francisco’s already been doing this for a few years with great uptake… if not (yet) revenues.

Open Data in this city is young, and perhaps even vulnerable to attack… Wouldn’t it make sense to create a profitable and self-sustaining office of Open Data, rather than one near-exclusively nurtured on funds subject to classification as “gravy” at a moment’s notice? Taking inspiration from GeoNames (the online database of over 7,500,000 geographical points of interest), perhaps the City could give unformatted and open-certified data away for free, while selling access to curated and contextualized data. The model of open data curation and sale is also being tweaked by SimpleGeo, a company that gives away the first 10,000 data interactions per day through a free API, but charges based on quantity above and beyond that. Do cities have the right to curate “vanilla” open data into sellable products and services? Should they?
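The SimpleGeo-style tiering described above – a free daily quota with metered billing beyond it – is simple enough to sketch in a few lines. To be clear, this is a hypothetical illustration, not SimpleGeo’s actual pricing logic; only the 10,000-free-interactions figure comes from their published model, and the per-call rate here is invented.

```python
# A minimal sketch of a freemium, quota-based billing model for a data API.
# FREE_DAILY_QUOTA reflects the tier described above; the rate is hypothetical.

FREE_DAILY_QUOTA = 10_000   # free data interactions per day
PRICE_PER_CALL = 0.0025     # hypothetical rate, in dollars per call

def daily_charge(interactions: int,
                 free_quota: int = FREE_DAILY_QUOTA,
                 price_per_call: float = PRICE_PER_CALL) -> float:
    """Return the day's bill: zero up to the quota, metered beyond it."""
    billable = max(0, interactions - free_quota)
    return billable * price_per_call

# Usage: a day of 9,000 calls stays free; 12,000 calls bills the 2,000 overage.
print(daily_charge(9_000))   # within the free tier
print(daily_charge(12_000))  # 2,000 billable calls at the hypothetical rate
```

A city data office could tune the two knobs independently: a generous free quota keeps the civic-hacker and puppy-adoption crowd on board, while the metered tier recovers costs from heavy commercial users.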

If we’re going to design new ways for our cities and communities to work, we might as well strengthen and utilize our understanding of the technological systems that we’ve already set in motion… they may prove valuable as tools. It’s hard to imagine the wave of growth in open-source technology and uptake coming to a standstill… but it’s still worth considering how other contradictory innovations originating on the Internet and its surrounding cobwebs might well apply to the design of our cities. *



* There’s no better place to think about this topic than at a *free* panel discussion… thanks, Subtle Technologies!

Header image from Tim Morris’ Flickr stream


Codifying Innovation at Apple

The creation of culture is about putting what’s inside on the outside. Some of the most inspirational figures in modern technology started off as dreamers, interested in exploring brave new technosocial paradigms by standing on the shoulders of cutting-edge ideas. But what happens when, a few decades into the game, it’s time to consider a succession plan?

In the pieces I’ve been writing here on <stabletalk>, I’ve been focusing on how the realization of new ideas often requires new toolkits. In many of my posts, I’ve referenced projects that run on hardware and software designed by Apple Inc., and it seemed like it was time to look at the company itself in the context of tools for innovation. I also figured that you were familiar enough with the aesthetics of Apple products, dear reader, to warrant the inclusion of some pun-tastic imagery.

How does a company like Apple, built upon pillars of design-driven innovation and countercultural business practice, approach the challenge of codifying and institutionalizing its values? While business analysts and cultural theoreticians concerned with the health of CEO Steve Jobs have been buzzing on the topic in the press, the codification of values and innovative process is already underway – inside and outside of Apple’s Cupertino campus.

Two years after my birthday, and last year: business innovation-as-usual.

I was born in 1982, in the midst of the Apple II era and the year when Steve Jobs was forced off the Lisa team and on to the Macintosh project. With a healthy interest in computers and technology, it made sense to me even in childhood that the social structures and institutions I would be drawn toward would explicitly concern themselves with the technologically defined aspects of our existence. While a generation above me argued about “Apple as a Religion” from the perspective of products and experiences they first encountered as adults, I think you really get to the heart of that discussion when talking to people who have experienced those products and experiences from childhood. Religion is a powerful force amongst converted adults… but it’s even more powerful when you get it from day one, right? A bit of research has even surfaced through the BBC noting similarities in neurological activity between people engaged in religious rituals and those engaged in the use / discussion / unboxing of Apple products.

While the Cult of Apple gets plenty of press, it’s also one of the ways in which Apple has indirectly (and that’s debatable) codified its values. This week marks the ten year anniversary of the company’s move into retail. It’s interesting to think about how the replacement of the Apple Store’s paper information sheets with iPads is meant to associate the company with innovative values in the minds of the young, who can be found in no small number huddled around MacBooks Air and Pro once the school bell rings. By providing a broader cultural context for its products, Apple has extended its realm of influence and relevance accordingly. If you’re into computers, Mac-vs-PC has been a “religious” debate that stands for much more than AltiVec engines, clickwheels, and advertisements featuring Justin Long. In this context, the seemingly spartan Apple Store becomes a church not only for retail consumerism (Bad!), but also for the valuation of design and innovation (Good!).

Organizationally, Apple has long been known as the corporate equivalent of a maximum security prison. For all of the cultural context the company enjoys, very little knowledge has escaped in terms of how things are actually run. I’ve known a few employees of Apple who were vocal about bad experiences resulting from the company’s misalignment with values assumed to be present from the outside, but it’s hard to get a conversation with anyone really enjoying it – they’re likely to be characterized by their total silence and ear-to-ear grins. It’s only been in the last few weeks that a snapshot of contemporary management structure and process at Apple has been revealed, in a fascinating article in Fortune by Adam Lashinsky (that you have to buy for $0.99 on the Kindle store, or for $4.99 as part of the issue on iPad).

Fortune's Apple Org. Chart images are behind a paywall... try one of these on for size.

Depending on how indulgent you are with your fanboy status, “Inside Apple” is either bleak or drool-inducing. It seems that the legends of Steve Jobs flipping out are very much accurate, and that his singular and legendary attention to detail is alive and well in spite of recent medical leave(s). On the other hand, Jobs’ affordance of evil genius-grade resources and opportunities to select crack teams also appears to be business-as-usual. One of the real gems of the article is an organizational chart unlike any you’ve seen for companies of Apple’s size. Instead of tiers of sprawl in upper management, there are really only a few layers: Steve Jobs, his cadre of VPs, and pretty much everyone else.

A spiritual leader like Jobs, whose vision statements were inspired by tabs of acid and who steered the invention of at least five of the most notable technological products of the last forty years, is going to be awfully hard for a company like Apple to write into its operations manual. The article in Fortune explores how the company has been making attempts to build corporate courseware (which I don’t expect to see on iTunes University any time soon, although it would be the design and business curriculum of the century) that institutionalizes Jobs’ rhetorical explications of the meaning of life into something teams can learn from, outside of the reality distortion field.


One strategy for creating a more inside-out culture at Apple.

Apple’s operations revolve around a benevolent dictator of design, and are surrounded by a devoted fanbase of hundreds of millions. If cultural innovation is about putting what’s inside on the outside, Apple has plenty of work and opportunities ahead. For all of the tools the company has shipped, and for all of the tools required to develop them, Apple has never really opened up with regards to its process. As the so-called Second Coming of Steve Jobs (who has an official biography coming out next year) begins to wind down, it will be fascinating to look at how the company reconsiders and then formalizes its inner values, processes, and toolkits.