All posts for Tag: guest editor

100% Organic Content

When I started my guest editorship of this blog almost a year ago, it was with a question about the relationship between tools and creations. Will new ways of doing things affect what it is that we do? To flesh out my feelings on that question, I’ve peeked into a number of fields where innovation is intense: from documentary filmmaking to game design, architecture to (my) marriage.

But there’s one branching network of innovation that interests me so much that it’s found a home in the roots of a number of projects I’m working on. It’s the realm of biotechnology, and the disruptions it could bring to the stories we tell and how we tell them are significant.

In his book Sketching User Experiences, Bill Buxton includes a number of figures exploring the ways in which various technologies have evolved: from university or military research projects, into skunkworks industrial concerns, then into consumer products and (sometimes) multi-billion dollar success stories. From the mouse to advanced gesture recognition to the Internet itself, numerous information technologies widespread today took twenty or thirty years to make any inroads in the mass market, and have enjoyed lifespans as long as the average human’s. While it’s possible we’re projecting our mortal uneasiness onto our digital creations, the more obvious implication (that Buxton also arrives at) is that getting things “right”, or even “OK”, takes an awfully long time.

See how long it’s taken the Human Genome Project and its forebears to gain steam? See how long it took gesture-based interface technology? See bright things in your future / bloodstream?

From Bill Buxton's book Sketching User Experiences, all rights reserved

If we keep this lag time in technological adoption in mind, perhaps looking to the nearest experimental physics laboratory for design inspiration isn’t the way to go… instead, we should be looking to the steaming pot of technological innovations from the last forty years, and trying to figure out what is right on the brink of boiling.

Time and time again, when I sit down and play this “Where’s Waldo” game, what I can’t help getting excited about is biotech – design and engineering endeavours bringing living systems and advanced information technology together. I’m certainly not alone – a 2005 report by the US National Science Foundation highlighted the ways in which so-called NanoBioInfoCogno revolutions could transform the world’s industries and societies rather extensively. There’s even an essay by Newt Gingrich sandwiched in the middle, if you’re feeling naughty.

What door do experience designers use to enter this party? Aren't soft skills a critical part of this convergence?

But while the technical breakthroughs and complex technical underpinnings of those transformations are exciting, what I find most interesting are the questions to do with products and services. And not just the economic and business model questions – the experience design questions, as well. What new stories will NBIC innovations allow us to tell? With what new tools will we weave them? Will contemporary models of interaction apply between form and content, or will we witness the mass emergence of invasive species and new food chains in our media ecology?

Should society take the red pill, or the blue pill?

In design fiction, the practice of creating artifacts and experiences from future hypothetical scenarios, biotechnology already runs rampant. Particularly in science fiction cinema and literature, it’s hard to put down your glass without spilling someone else’s cocktail of personalized medicine (with a wetware wedge). Paul Di Filippo refers to the domain as RiboFunk in his writing. Inception, The Matrix, A Scanner Darkly, Children of Men... these films and many more all have their narratives woven finely through a lattice of biotechnological products. In gaming the situation is similar, from Assassin’s Creed to Deus Ex (both intellectual properties developed in Canada… perhaps this is our thing?) Björk’s collaboration with Scott Snibbe, Biophilia, is an interesting new kind of cellular entertainment in a number of senses. The subject matter is even beginning to pervade the meatspace… I’m in the midst of building out an experience called ZED – it’s a transmedia biotech role-playing adventure that will unfold across Toronto over the next eight months, letting audiences take serious bites out of a story exploring the darker sides of our potential future.

If these are the stories exploring a world of ubiquitous biotechnology, what new tools might emerge to tell them? Some say those tools are technological – Rohit Talwar of the foresight consultancy Fast Future suggests that by administering narcotics and nootropics activated by electromagnetic stimulation, the DJs of the future could take us into personally curated altered states… with the requisite $20 cover, of course. The BioArt work of Steve Kurtz famously (and disturbingly) saw him threatened with bioterrorism charges, setting an intimidating precedent for the use of living materials and systems in artwork exploring our relationship with technology. Nevertheless, I’m looking forward to the MakerBot Replicator of 2024… there’s nothing like printing a custom OLED slime mold and enjoying interactive TV on all the walls (and ceilings) of your house.

On the other hand, perhaps the tools that most directly steer our relationship with biotechnology in the future will be our values. People have been using “organic biotech” drawn (and sometimes domesticated) from nature to induce altered states of awareness and interaction with information for thousands of years. The posthumanist movement adheres to the value of updating our neurological firmware for better adaptability to change, and on the conceptual level, it’s difficult to identify significant distance between their philosophy and that of an experienced shaman or field ethnobotanist. That said, if the form of our future content creation suites is to resemble a patch, tab, or injection, we’ve got some serious social issues to address… at the speed of technological innovation.

When it comes to biotechnology, industrially or socially, there is very little that isn’t still up in the air. The industry is only now becoming profitable, and in the coming years leaders, policy-makers, and just about everyone else will face numerous opportunities to interrogate emerging technologies and make decisions about their use. But as the DIYBio movement surges, traditional stakeholders from industry and the media alike run the risk of being overwhelmed by a rogue wave of indie biotech experience designers. This is important to keep in mind – as a civilization we aren’t always very good at cross-generational and open-minded discourse on ethnopharmacology, as Richard Branson has been pointing out quite frequently of late.

Generally not a great thing to see at a biotechnology convention.

Of all the ways to think about what could result from the rapid adoption of biotechnology into niches currently filled by established material and information technologies, the use of stories strikes me as the most interesting. Films, games, and other narrative media act like mirrors trained on our social values and behaviours as much as bacterial growth on the surface of new gear. There was an issue of HorizonZero in 2005 that really got me thinking about how we treat this topic; it’s still a great read.

While biotech may ultimately be responsible for reshaping how we tell stories and design experiences, I think it’s just as likely that the ways we reflect on biotechnological themes and subject matter today will shape the form these disruptive and revolutionary industries eventually take. The best way to consider the relationship between form and content is as a feedback loop: it’s never too late to get in early, and no matter how far you push the boundaries, you’ll never be done.

Trevor Haldenby is an interactive producer and photographer living in Toronto. He has attended Wilfrid Laurier University, Rhode Island School of Design, CFC Media Lab, and is presently completing an MDes in Strategic Foresight & Innovation at OCAD University.
@trevver | www.longexposure.ca | www.zed.to

Happy Winternet

Photo courtesy of Adam Foster on Flickr

With the seasonal holidays upon us – from the Solstice to Christmas, Hanukkah to Kwanzaa – I set out to write a post exploring what these darkest of days in the Northern Hemisphere mean to me as an individual with technology on the brain. What I learned is that the holidays, for all of their affiliations with altruistic acts of kindness, are about as digital as the special effects dominating each December’s blockbuster cinematic offering. Focusing in on the systems and assorted trappings associated with the Christmas holiday I celebrate each year, it’s rather difficult indeed to separate the mistletoe from the machine.

The most notorious of connections between the winter holidays and technology are, blissfully, already nearly a month behind us. Black Friday and the following Cyber Monday, some of the most significant days of the year for the technology-infatuated, kick off a shopping season said to account for more than a quarter of the personal spending that takes place in the United States each year. How much of that spending is allotted to game consoles, iPads, and DIY kits for 3D printers remains a mystery to me, but after scoping out PDF flyers for ThinkGeek, Amazon, and the Apple Store it’s hard to deny the weekend’s economic significance. These days, trumpeting the arrival of the holidays a month before Christmas is hardly extreme – if you’re a Snoopy fan from my generation or younger, you may not even see the irony in It’s The Easter Beagle, Charlie Brown’s shopping centre, decked out with holly by early April. The phenomenon of Christmas Creep appears to be gaining steam.

But the holidays aren’t (supposed to be) all about mindless shopping. Just as growing numbers of businesses are trying to find ways to measure the positive social impact of their innovations, growing numbers of consumers are turning to alternative giving approaches: personalized donations to not-for-profit organizations, charities, and aggregators like CanadaHelps. It’s not hard to imagine a future where contributions to the Salvation Army are offered by dinging one’s phone against a ubiquitous (PRODUCT) RED bell, completing a transaction through the postmodern miracle of near-field communication.

In the meantime, we have a rather serious holiday reality to face in my household… and it has to do with the coniferous “plant” inhabiting our living room. While life cycle assessments of artificial trees from the Pearl River Delta versus the kind taking up over 120 square kilometres of Ontario’s bucolic landscape do seem to indicate that a PVC tree can be a more ecologically responsible choice than a farmed Douglas Fir, it takes about 20 years of reuse for the plastic version to break even. New fads from fiber-optics to holographic mylar branch treatments complicate any analysis of what is arguably the planet’s first human-made invasive species, but perhaps the Christmas tree’s future is bright. A global team of researchers published a report earlier this month suggesting the feasibility of an artificial leaf that could be twice as efficient at photosynthesis as what you’ve been slinging from your eavestroughs and bagging for the last two months. Who will need giftcards or presents when the seasonal decor of Canada’s snowed-in homes can scrub carbon dioxide out of the air as fast as a tropical rainforest?

As cultures have hybridized and local economies globalized over the past few hundred years, one of the touchiest issues raised has been how to refer to the diversity of world holidays celebrated during our winter months. The city of Birmingham proposed “Winterval” for a few years, one of the writers of Seinfeld introduced us to the concept of “Festivus”, and affiliates of the Pastafarian movement (praise be to the Flying Spaghetti Monster’s noodly appendages) have settled on the to-the-point “HOLIDAY.” But for a secular and technology-loving guy like me, none of these titles really fit the bill.

In hopes that a look at the winter holidays through the lens of personal technology would inspire me to come up with a new name, away to the Windows (I mean, Mac OS) I flew like a Flash (I mean, HTML5 canvas). What I learned stunned me: Did you know that the first successful communication between a Web client and server took place on December 25th, 1990? That makes this Christmas the World Wide Web’s 21st birthday… perhaps it should be the one cracking open the rum and egg nog, not me. But the connections don’t end there – Claude Chappe, creator of a semaphore system referred to as the first “mechanical Internet”, was born on the 25th of this month in 1763. 182 years later, so was Rick Berman, (notorious) producer of the Star Trek series that inspired so many in terms of gadgetry. The same date marks the birth of Clara Barton, a patent clerk who went on to found the American Red Cross. If the contemporary holiday season is characterized by contrasts between economic, technological, and altruistic factors, then I think these achievements and figures are as relevant as any historical precedent or astronomical phenomenon.

Since I was a child, I’ve been told that the real value of the holidays is an opportunity to step back and consider the opportunities I have, and how to pay the benefits I enjoy forward to others. Perhaps the world’s technology conglomerates, bestowed with so much opportunity and wealth, could take such a philosophy to heart. Rather than soaking up the revenues of yet another Cyber Monday, imagine a holiday when the largest tech innovators practiced a unique version of “alternative giving”, donating one or two of their most closely-held patents to the public domain each year.

Until then, I’ll continue to practice a modified version of a childhood Christmas ritual… leaving a plate of $10 and $20 bills out with a glass of milk for Jimmy Wales, and the rest of the elves at the Wikimedia Foundation.

Thanks for reading, have a Happy Winternet! If tossing CFC Media Lab staff at various flimsy structures for kicks is more your thing, then check out Angry Elves, the Lab’s tongue-in-cheek gift to you this holiday season!

Trevor Haldenby is an interactive producer and photographer living in Toronto. He has attended Wilfrid Laurier University, Rhode Island School of Design, CFC Media Lab, and is presently completing his MDes in Strategic Foresight & Innovation at OCAD University.
@trevver | www.longexposure.ca | www.zed.to

Languages and their SQLs

Languages are the vessels of culture and history. Unfortunately, many of them are also going extinct at an alarming rate. As English, Mandarin Chinese, Spanish and a handful of other languages dominate the globe, ancient tongues are often unable to compete for a userbase of fluent speakers, leaving them vulnerable to some of the same selective pressures that might unseat the inhabitants of an ecological system.

The reference catalogue Ethnologue estimates that there are approximately 6,900 languages in use today across the globe. Many scholars argue that the actual number could be much smaller than that, given that many languages are closely related to one another, and are spoken fluently by very few people. Michael Krauss (a linguist and language documentation advocate) and UNESCO estimate that up to 80% of global languages are at risk of extinction in the next 50-100 years.

Patricia Ryan on what we miss when we insist on English.

But while the languages we humans have spoken for thousands of years are experiencing a global crisis not unlike the one facing most of our mammalian cousins, there is one linguistic taxon that is experiencing a period of growth and prosperity – computer languages.

In the same period of time that most of the world’s languages have become classified as endangered, more than 8,000 computer languages have come into existence. The exact number is difficult to determine, though the Programming Language Popularity project and Transparent Language Popularity Index are attempting an index. Many computer languages are close cousins, and a few dominate the landscape on the shoulders of others, but it’s interesting to note that as we move towards a real-world environment of linguistic homogeneity, we’re experiencing an explosion in virtual linguistic diversity.

From a great history of computer languages in Wired, at the DigiBarn

But for me, here’s where things get puzzling. While I’m an English speaker (the so-called language of the Internet), an elder digital native (an evolution of Marc Prensky’s term), and a creative type when it comes to all things technological, I don’t speak a single programming language – certainly not anything like Glagol, anyway. How could this be? How could someone raised on computers, employed and engaged in the interactive industry, and pretty geeky by many standards, not be able to program?

I said that I can’t program… but depending on your definition of the term, that’s not entirely accurate. SQL is not a language I speak (how many programmers do “speak” in their tongue?), though I do read and write HTML, and I have been known to dabble in simple JavaScript. I’ve never written a graphics driver, but I do muck around with some frequency in software suites like Max/MSP and Quartz Composer. My introduction to “app development” was with HyperCard, and Macromedia Director, not Objective C.

If computer literacy is defined as familiarity (perhaps acquired through rote learning) with a set of specific tasks and workflows, and computer fluency is defined as the ability to apply advanced concepts about the functionality of a computer and its languages to the solution of problems, then I have a real problem when it comes to classifying myself. I’ve always felt comfortable in front of a computer, particularly when I don’t know exactly how to approach the problem before me, or where an application of critical design thinking is just the ticket to help identify a solution. You could say that I’m conceptually high-functioning, and essentially literate, but can one be classified as fluent without a mastery of the basic skills underpinning the whole Wizard of Oz show?

The tools that I learned to “program” interactive experiences on were tremendously eye-opening, but rather limited when it came to compatibility. Flash and Director relied on proprietary plug-ins to play back content from a CD-ROM, or (gasp!) in a browser window. At the same time, the HTML standard managed by the World Wide Web Consortium promised increased compatibility… but fewer flashy features. No tweening, translucency, or motion graphics for you, open standards zealots.

The Wilderness Downtown: Digital Natives' Citizen Kane.

HTML5, the most recent version of the W3C’s standard, is bringing much of the glitz of Flash and Director back into the browser… and in a way that ensures broad compatibility without the restrictions of proprietary technology. If you’ve seen Arcade Fire, Google, and Chris Milk’s collaboration, The Wilderness Downtown, you’ve got a good idea of what HTML5 enables. That said, I suspect that most people are familiar with HTML5 and its potential only because of Apple’s famous decision to “ban” (the oft-delayed and resource-flogging) Flash from its iOS devices.

Perhaps designed to cater to people like me (high-functioning conceptual creatives who flunked out of math class), a new and interesting generation of production tools is emerging that promises to address issues of compatibility and ease-of-use simultaneously.

Applications like Tumult’s Hype (and Adobe’s Edge) are offering the ability to author slick HTML5 experiences through an accessible interface… it’s what iWeb probably should have been on day one (and probably will be within the next few years) – a web-app creator for the rest of us.

Macromedia's Director vs. Tumult's Hype... interaction design for the rest of us?

While these tools are incredible, offering users with an understanding of design patterns rather than code the ability to churn out new content and products, they’re troubling in that they raise computer literacy without offering much in the way of enhanced fluency. They offer enormous numbers of people previously not capable of authoring interactive content experiences the opportunity to do so, with the caveat that they need not learn much about what’s under the hood in the process.

What might the long-term effects be of offering users a language of concepts and design patterns without an underlying vocabulary? Must all designers, programmers, or architects be linguists in order to produce meaningful and innovative work? What connections might exist between computer literacy and fluency, and the bigger social picture? In the United States, the level of a person’s functional literacy can be roughly correlated with income level and risks associated with committing a crime. Are technophiles like me, managing high-level semantic fluency with few syntactic skills, an at-risk group for media piracy or affiliation with Anonymous?

In the real world, we are witnessing a struggle to preserve languages in the face of an almost virally expansive linguistic homogeneity. What makes us think that the emergence of a dominant digital language (even if it’s an open standard) will enable an equitable representation of ideas, views, and other information in its realm? Perhaps the legacy of projects like the Long Now Foundation’s Rosetta will be to digitize languages on the brink of extinction today… so that their unique attributes might be analyzed and adopted into the computer languages of tomorrow.

Trevor Haldenby is an interactive producer and photographer living in Toronto. He has attended Wilfrid Laurier University, Rhode Island School of Design, CFC Media Lab, and is presently completing his MDes in Strategic Foresight & Innovation at OCAD University.
@trevver | www.longexposure.ca | www.zed.to


Security Toolkits

Earlier this week, Howard Schmidt (Cyber-Security Coordinator of the Obama Administration) launched the U.S. International Strategy for Cyberspace. Only days before, Howard Stringer (CEO, President, Chairman of Sony Corporation) announced his own cyber-security strategy of sorts.

Unfortunately, while Schmidt’s strategy was presented as pre-emptive, Stringer’s came in reaction to a real-world cyber-security disaster. As you might be aware if you watch the news or if Turismo and XIII make regular appearances in your vocabulary, Sony pulled the plug on their PlayStation Network nearly a month ago following what could be the largest theft of personally identifiable information in history. While the Network was brought back online last week, and no credit card fraud has been confirmed, the fur has probably only started to fly. Lawsuits claiming damages in the billions are emerging from individual Network users as well as organized groups, and government reaction to the leak (and the systems put in place to deal with it) has been decidedly less than sunny.

The disaster Sony has on its hands reminds us that security systems and lockboxes are actively designed in competition with those who would break into them, and that in this digital age the need to innovate in the creation of new security toolkits is more pressing than ever. While the activists behind the group Anonymous (famous for its Operation Payback pseudo-political attacks on Visa, PayPal, and Sarah Palin) have denied responsibility for the attacks on Sony’s servers, it’s difficult to gauge the value of their statement when it comes from a globally distributed and largely decentralized organization. Thieves and hooligans alike are innovating with their tools, culturally and technologically, but what innovations on the horizon promise a more secure future for our personal digital information?

When awkward about introducing online anarchist groups, trust Know Your Meme.

The European Union and Google both announced last year that they were considering employing crowdsourcing tactics in their war on electronic crime… but both also seem to want to use the world’s part-time cyber-sleuths to point out bugs potentially relevant to sensitive user data systems, rather than to investigate actual break-ins or theft.

The U.S. Office of Naval Research is rolling out a transmedia simulation experience this month that crowdsources solutions to Somali pirate hijacking scenarios, but they don’t appear to be moving in the direction of addressing e-crime directly, either. Mozilla and the open-source root of Google’s Chrome, Chromium, also take a citizen-deputy approach to software security, paying bounties to outside researchers who report vulnerabilities. Presumably a whole new ecology of web interfaces and experiences would be required in order to mobilize an online citizen justice force more directly in the war on electronic crime.

You can always count on xkcd to take the edge off of new tech.

Many in the online security game have pointed over the years to the potential significance of quantum computing technology to the realm of information security. Existing Advanced Encryption Standard keys of 128 or 256 bits have proven secure enough to stand up to most of today’s hackerly abuse, but the 56-bit DES keys that they replaced as recently as the late 1990s can now be broken in just hours.

The impacts of quantum computing on the online security game are twofold – quantum key distribution could make it significantly easier to quickly identify whether or not a message or packet of information has been intercepted, but quantum algorithms could also be turned against today’s encryption: Shor’s algorithm threatens widely used public-key cryptosystems outright, while Grover’s search effectively halves the strength of symmetric keys like AES. Even the McEliece cryptosystem, an algorithm designed for post-quantum cryptography, has had specific parameter sets broken by attacks under certain circumstances.
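The gulf between those key sizes is easy to underestimate, so here's a bit of back-of-envelope arithmetic. The rate of 10^12 keys per second is a hypothetical figure chosen for illustration, not a benchmark of any real machine:

```python
# Back-of-envelope brute-force arithmetic for symmetric key sizes.
# RATE is a hypothetical keys-tried-per-second figure, for illustration only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
RATE = 1e12  # hypothetical: one trillion keys tried per second

def years_to_search(key_bits: int, rate: float = RATE) -> float:
    """Expected years to find a key: on average, half the 2**key_bits space."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / rate / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits}-bit key: ~{years_to_search(bits):.3g} years on average")
```

At that rate a 56-bit key falls in a matter of hours, while a 128-bit key demands on the order of 10^18 years: every extra bit doubles the work, which is why even a quadratic quantum speedup only "halves" a key's bit-strength rather than breaking it outright.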

BitCoin in use around the world. Clearly not popular at all.

The emergence of new and increasingly pervasive products and services built upon our personal digital information is a fascinating phenomenon to watch. While Sony’s information leak has been front-and-centre in the news of late, more and more attention is being paid to disruptive innovations like BitCoin, the distributed currency system created over two years ago by the mysterious Satoshi Nakamoto.

By going entirely digital with the currency itself, BitCoin enables pseudonymous and yet fully decentralized transaction records and economic controls. While the system makes use of innovative applications of public key cryptography, its security and viability in the long term remain uncertain. What seems most interesting about BitCoin is its flexibility over time – given that the system is entirely digital already, what could be referred to as economic policy changes could be rolled out in a manner unfamiliar to fans of Lizzie the Deuce and Mack King. (Unfortunately, Canadian bills don’t slang-ify as well as their American counterparts.)
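To make the "entirely digital record" idea concrete, here's a minimal sketch of a hash-chained ledger, in which each entry commits to everything that came before it. This illustrates only the tamper-evidence trick at the heart of such systems; it is emphatically not BitCoin's actual protocol (no signatures, mining, or network consensus):

```python
# A toy hash-chained ledger: each entry's hash covers the previous hash,
# so editing any old record invalidates every hash after it.
# Illustration of the chaining idea only -- not Bitcoin's real data structures.
import hashlib

GENESIS = "0" * 64  # placeholder hash for the first link

def _digest(prev_hash: str, record: str) -> str:
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

def build_ledger(records):
    """Return a list of (record, hash) pairs, each chained to its predecessor."""
    chain, prev = [], GENESIS
    for record in records:
        h = _digest(prev, record)
        chain.append((record, h))
        prev = h
    return chain

def verify_ledger(chain):
    """Recompute every link; a single edited record breaks the chain."""
    prev = GENESIS
    for record, h in chain:
        if _digest(prev, record) != h:
            return False
        prev = h
    return True

ledger = build_ledger(["alice->bob:5", "bob->carol:2"])
print(verify_ledger(ledger))   # an untouched chain verifies

tampered = [("alice->bob:50", ledger[0][1])] + ledger[1:]
print(verify_ledger(tampered))  # a rewritten first entry does not
```

Because anyone holding the chain can re-run the verification, no central bookkeeper is needed to certify the history, which is precisely what makes "economic policy changes" to such a system a matter of software rather than mints and presses.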

Sony has room inside the PS3 design for one of these, right?

With currency looking like a candidate social technology for digitization, what could be next? With LinkedIn’s potentially $4,000,000,000 IPO in progress, it’s hard to imagine that the other big social networks will be far behind. While swelling valuations and growth strategies are surely front-of-mind for these emerging business titans, I wonder how the information theft scandal that has plagued Sony this past month is forcing them to reconsider their security strategies.

How many of these companies are allocating significant resources towards the creation of the new toolkits required to secure their information service ecosystems? What families of solutions will they employ? How many will follow in Sony’s footsteps and realize the dangers (real and reputational) of failing to innovate in terms of information security?

Trevor Haldenby is an interactive producer and photographer living in Toronto. He has attended Wilfrid Laurier University, Rhode Island School of Design, CFC Media Lab, and is presently completing a Master’s of Design in Strategic Foresight & Innovation at OCAD University.

Frontline Documentary

On January 12th, 2010, a violent earthquake ripped through the earth’s crust 13 kilometres beneath southern Haiti. Hundreds of thousands of people lost their lives, and millions more were displaced from their homes. Massive collapses took place across health care, transportation, and communications infrastructure systems. Within hours, this shattered world was descended upon by a sea of visitors from across the globe. Supplies and rescue personnel from the Dominican Republic arrived in Haiti alongside their equivalents from Iceland, China, Qatar, and Canada. One of the groups that arrived in Port-au-Prince on January 15th was made up of members of the Red Cross Field Assessment and Coordination Team (FACT).

The members of the Red Cross FACT Team had come to Haiti to manage and oversee logistics around the international disaster recovery. But they also brought with them a team of photographers and filmmakers, as well as a web content producer. The first group was intended to produce a three-part documentary for TVO on the challenges FACT faced in stabilizing the situation, and the second unit was dedicated to capturing additional individual and social stories for a set of interactive segments.

The documentary team attached to the FACT was capturing the chaos and upturned humanity of Haiti in the weeks following the earthquake for a project called Inside Disaster: Haiti – a densely populated and excellently designed multimedia information resource on the Haiti earthquake, and ongoing recovery efforts. Inside Disaster provides clean and understandable data on the history of humanitarian aid; media assets presenting objective views of the disaster; and interactive mini-documentaries exploring the experiences of survivors, journalists, and NGO workers.

This week, I had the opportunity to sit down with Inside Disaster’s Web Field Producer, Nicolas Jolliet. We discussed his involvement in the project, and his understanding of the relationship between innovation and the evolving media landscape.

Presently, Nico is spending his spare hours pioneering the integration of numerous robotic and sensing technologies into a tool he has (in collaboration with whole communities of independent filmmakers) been dreaming of for years. After relocating to a nearby park, he demonstrated his newest creation to me: a helicopter-mounted DSLR camera rig capable of lifting several kilograms of equipment to altitudes of thousands of feet. While the rotors spin too noisily to let him track the birds migrating through Toronto this time of year, his helicopter (I suppose it’s technically a hexacopter) will be an invaluable tool for documentary production. The tool provides high quality HD footage (check out a tropical demo video on YouTube) along GPS-routed paths, at a meaningfully human scale that has previously been difficult to negotiate in the field – higher than a dolly or third-story window, and lower than a helicopter. It’s easy to find more adaptations of technological tools for the frontlines of filmmaking in a blog post Nico published for Inside Disaster, before the earthquake in Haiti even determined the project’s setting.

I shot this video of Nico testing out his hexacopter in a Toronto park.

Many new documentary filmmaking technologies, from super-light camera rigs to DIY cranes and follow-focus units, have emerged out of lead-user communities. Eric von Hippel coined that term in 1986 in reference to groups of hobbyists doing a better job designing products than product designers. Some of the earliest innovations in 35mm adapters for HD camcorders came out of indie filmmaker messageboards and fora. Companies like Redrock and Zacuto quickly realized that a product category had emerged, and that they could make quite a bit of money selling premium versions of what the cutting-edge filmmakers had identified as offering a significant competitive advantage. The DSLR revolution in independent filmmaking is presently closing the loop – offering the look-and-feel of 35mm cinema at a relatively low price, with minimal hassle (depending on your preference for P mode over M). Nico built a mind-blowing customized collapsible crane and dolly system for a documentary he shot in the Amazon a few years ago, but apparently you can already buy one that’s lighter (if not on your wallet) from one of the big DSLR rig manufacturers. There’s a great interactive slideshow on the Inside Disaster site that explores some technology specifically adapted for disaster recovery, as well.

One of the most interesting things I learned about Inside Disaster from Nico was how the crew challenged themselves to compress documentary workflows into live journalistic timelines. To produce content for the Inside Disaster website and social media channels, Nico was working 24-hour shifts of concept development, travel, filming, editing, writing, compression, blogging, and transmission. While documentaries used to be associated with months or years of incubation prior to release, new expectations in an age of always-on media have condensed the equipment and responsibilities of entire film crews down into the hands (and backpacks) of a single operator. Nico claims that backgrounds in languages, musical performance and production, photography, writing, and filmmaking helped land him the job as Field Producer.

In a recent piece for Point of View magazine, Katie McKenna, the producer of Inside Disaster, noted that the project launched in two phases – one entirely dedicated to logistics and pre-positioning within social media networks, and another dedicated to storytelling from teams in Haiti with a long tail made up of resequenced content for branching online interactive experiences. But while open-sourcing the marketing and distribution is one way to embrace emerging toolkits in documentary production, open-sourcing the production and storytelling is something altogether different.

In the world of ethnographic research, everything exists within a cultural context. In isolated or marginalized communities, even getting a realistic picture of the landscape in which cultural values are situated can be a difficult research task. Photovoice, a participatory research method pioneered in the 1990s by Caroline Wang and Mary Ann Burris, puts cameras in the hands of these communities in the hope that the process of taking and analyzing pictures will stimulate critical dialogue around the opportunities and challenges they face. In a way, photovoice represents an evolution of the documentary form in the direction of open-source ideation and production. Inside Disaster hails from a different lineage, but broke new ground in other ways relative to the social commons. All of the photos and footage that Nico transmitted each night from Haiti are available under Creative Commons licences from Flickr and YouTube. While there is plenty of context from frontline journalism and documentary cinema surrounding the Inside Disaster content, that open-source status is itself a significant challenge to the patterns and structures of mainstream media.

Discussing these topics in Nico’s Toronto studio, it was difficult not to let my mind wander to the feedback loops linking new innovations and the tools that support them. Guitars and mixing equipment undulate across one of Nico’s walls, and a fully realized robotics bay juts from the other. While inspecting his newest hexacopter prototype (comprising firmware, structural elements, and sensors refined on the shoulders of thousands of collaborating un-experts online), I asked Nico how he would feel about me outing him as a closet engineer. He laughed, and told me that it’s not really about engineering – it’s about using new tools to reach new levels of quality in production values and accessibility. “It’s about creating a Steven Spielberg film from your backpack.”

This year, Inside Disaster provided the world with unmatched views of a human and natural disaster. In order to meld the traditions and techniques of journalism and storytelling with the realities of an always-on media landscape, Nicholas Jolliet and the production team of Inside Disaster brought new innovations not only to their vision for documentary storytelling, but also to the tools required to realize it.

Trevor Haldenby is a producer and photographer living in Toronto. He has attended Wilfrid Laurier University, Rhode Island School of Design, CFC Media Lab, and is presently completing a Master’s of Design in Strategic Foresight & Innovation at OCAD University.

Retro Activity

It seems like wherever I go these days, I wind up looking at the past. Half of the photos my friends share on Facebook and Twitter have been run through a battery of filters to look like they were shot in 1969 (that’s even the name of one of Hipstamatic’s workflows), and now the same thing is happening with video thanks to apps like 8mm Vintage Camera.

While a major trend of late in digital content creation tools has been the retro styling of interfaces and artifacts, the last several years of PC and console gaming could be seen as a pilgrimage in the opposite direction. Solid Snake, Niko Bellic, and Nathan Drake all furrow their blemished brows and glower at us menacingly in 1080p… at least Master Chief had the decency to put on a helmet.

But not all games operate within this paradigm of photorealism – there are families of titles evolving on the plains outside of the Uncanny Valley, and groups of developers more interested in experimenting with gameplay than participating in the arms race of shader technology.

A great example of this trend towards retro visuals is the Swedish indie juggernaut known as Minecraft. The premise of the game is simple – wake up in a wilderness, prance about exploring for most of the day, find a way to build shelter before nightfall, avoid becoming dog food for a menagerie of roving monsters – but the really interesting stuff at work in Minecraft is in the context of sandbox gameplay and open collaboration. You can work with friends online to architect elaborate in-game underground fortresses, treehouses, or even working arithmetic logic units; but everything you build has to be crafted from natural substances mined from the world around you and represented by blocks about a metre on a side. While the world of Minecraft is vast, it’s also quite graphically granular. The experience looks and feels more like the result of some macabre mash-up of panspermia and Tetris than other sandbox games like Garry’s Mod, Little Big Planet, or Second Life. The easiest way to describe Minecraft’s gameplay to newcomers is as a digital version of Lego… a version where each block must be carefully smelted from elusive minerals at the core of the earth.

Trevor Haldenby's Minecraft Kingdom

My Minecraft Kingdom... not so meta-meta.

What could have compelled the game’s creator Markus “Notch” Persson to employ such a distinctly retro style in the creation of such an innovative game? And what features of the game are responsible for the sale of more than 1.8 million units in the last year?

Performance
Minecraft is built and sold as a Java application. As many have discovered, it runs in a corporate web browser approximately as well as it does on a dedicated gaming rig. 1999’s Quake III finally moved into the browser as “Quake Live” last year after heavy modifications, but Minecraft was there from the start – by drawing the thousands of blocks that make up each world dynamically, and by avoiding particularly elaborate textures. You can customize your in-game character on the minecraft.net site using a 32×32 pixel image… about a third the size of what made for a decent LiveJournal icon ten years ago.
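That dynamic, block-by-block drawing can be sketched as a chunked, lazily loaded world. The snippet below is a conceptual illustration only, not Minecraft’s actual code – the chunk size, block names, and view distance are my own assumptions:

```python
# Conceptual sketch of chunked, on-demand world generation, in the spirit
# of Minecraft's approach (not its actual implementation). Chunks are built
# lazily: only regions near the player exist in memory, which keeps a vast
# world cheap enough to draw even in a browser.

CHUNK_SIZE = 16  # blocks per chunk side (an assumption for illustration)

class World:
    def __init__(self):
        self.chunks = {}  # (cx, cz) -> chunk data, filled on demand

    def _generate_chunk(self, cx, cz):
        # Stand-in for terrain generation: a flat field of "grass" blocks.
        return [["grass"] * CHUNK_SIZE for _ in range(CHUNK_SIZE)]

    def get_chunk(self, cx, cz):
        # Lazy loading: a chunk is only built the first time it's needed.
        if (cx, cz) not in self.chunks:
            self.chunks[(cx, cz)] = self._generate_chunk(cx, cz)
        return self.chunks[(cx, cz)]

    def visible_chunks(self, px, pz, view_distance=2):
        # Only chunks within the view distance around the player get drawn.
        cx, cz = px // CHUNK_SIZE, pz // CHUNK_SIZE
        for dx in range(-view_distance, view_distance + 1):
            for dz in range(-view_distance, view_distance + 1):
                yield self.get_chunk(cx + dx, cz + dz)

world = World()
drawn = list(world.visible_chunks(px=40, pz=40))
print(len(drawn))         # 25 chunks in a 5x5 neighbourhood
print(len(world.chunks))  # only those 25 exist in memory
```

The design point is that the renderer never touches the whole world, only the handful of chunks around the player – which is what lets a modest machine cope with an effectively unbounded map.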

Familiarity
It’s often assumed that hyper-real graphics will feel good because they’re similar to how we perceive the world with the HD cameras embedded in our faces. The purveyors of gigabyte-packing graphics cards surely presume that visual accuracy is what’s behind the verisimilitude of a good gaming experience. But what about those of us who grew up under the supervision of the Super Mario Brothers and a 12” TV, or their ancestors from the Old Country of Atari? I think it stands to reason that 8-bit graphics and simple animations make the average 20- or 30-something gamer feel more at home than anisotropic filtering does.

Mechanics in Focus
When you’re playing a photorealistic 3D title, you’re likely to spend less thought on the underlying gameplay mechanics than you would during a basement romp in a refrigerator box. Games defined by shiny pretty things certainly have a time and place, but when a title deliberately immerses you in a lo-fi look-and-feel, you’re more likely to be pleasantly surprised by the ingenuity or complexity of the mechanics at work.

Kenfagerdotcom's Minecraft Kingdom

Kenfagerdotcom's Minecraft Kingdom... meta-meta to the power of meta.

Minecraft isn’t alone in utilizing retro graphics to get audiences engaged, before challenging them with innovative gameplay concepts. Jason Rohrer and Daniel Benmergui are both developer-artistes putting out engaging and genre-busting titles with beautiful 8-bit looks.

Screenshot from Jason Rohrer's PASSAGE

Screenshot from Jason Rohrer's PASSAGE

If you’re hungry for a particularly well-executed experiment in innovation through nostalgia, there’s a brand new Toronto-bred iPad title you’ve got to check out: Superbrothers: Sword & Sworcery EP, a collaboration between Capybara Games, the Superbrothers squad of visual artists, and musician Jim Guthrie. If you spent any time with Sierra’s King’s Quest in 1990 (itself a visual retooling of the 1984 Adventure Game Interpreter original, rebooted once more last year), you’ll feel eerily at home here. But after only a few minutes of play it becomes clear that Sw&Sw is about experimenting with social gaming features that the retro aesthetic might have prevented you from anticipating. For instance, all of the game’s dialogue takes place in exchanges of 140 characters or fewer – enabling players to tweet conversations as they progress, from within the game’s HUD. It’s quite a clever little innovation, allowing players to share their progress through a game that doesn’t quite align with the High Scores ’n’ Headshots model of friendly competition familiar to many console gamers. Even the title of the game is displayed on my iPad’s home screen as a hashtag.
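The 140-character dialogue constraint amounts to a simple authoring rule, which can be sketched in a few lines. This is an illustration of the idea only – the function, names, and sample line below are my own, not from the game’s actual code:

```python
# Sketch of a tweet-length dialogue constraint: split a line of prose into
# word-preserving chunks that each fit within a character limit, so any
# chunk can be shared verbatim as a tweet.

TWEET_LIMIT = 140

def to_tweets(dialogue, limit=TWEET_LIMIT):
    """Split dialogue into word-preserving chunks of at most `limit` chars."""
    tweets, current = [], ""
    for word in dialogue.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                tweets.append(current)
            current = word
    if current:
        tweets.append(current)
    return tweets

# Hypothetical dialogue line, chunked at an artificially small limit
# so the splitting is visible.
line = ("The two of us got the woefully vague notion to seek out "
        "a mysterious golden trigon, and we set off without a plan.")
for tweet in to_tweets(line, limit=60):
    print(len(tweet), tweet)  # every chunk fits within 60 characters
```

Writing every exchange under the limit from the start, rather than truncating it later, is what makes the sharing feel native rather than bolted on.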

Superbrothers Sword & Sworcery EP Screenshot

Screenshot from Superbrothers: Sword & Sworcery EP

Could Capybara have gotten away with encouraging Sw&Sw players to tweet their progress in a photorealistic first-person shooter version of the game? Possibly. Could they have maintained as much of the delightfully corny Your Highness-esque dialogue with such an approach? Perhaps. But could they have made audiences from 15 to 35 feel immediately comfortable with the title while embracing its innovative idiosyncrasies? I’m skeptical.

For a particular group of gamers born in the final decades of the 20th century, 8-bit is the definitive visual vernacular – the lingua franca spoken by fans of racing, RPG, and shoot-em-up titles alike. Perhaps these audiences simply take comfort in the styles associated with a particular era of game development (just as classic rock inevitably trumps auto-tune in the minds of members of my parents’ generation), or perhaps there are valuable lessons to be learned here about how innovation can emerge from the juxtaposition of new ideas with the obviously ancient.


New Media Generation(s)

I would like to use my time as Guest Editor to explore an idea that has stuck with me over the last ten years: designing something new often requires new tools.

The Strategic Foresight Group defines foresight as the merger of forecasting and insight. While I’ve long been a fan of clever coinages, there really is some value to this one. Forecasting is an interesting tool, whether it’s being employed at brokers’ offices on Bay Street, at CTV’s meteorology desk, or in a psychic’s parlour on Yonge Street. All of these forecasters use specific models, methodologies, maths, and mantras in their process; and each has a unique set of insights providing the foundation for them.

Design is one of the areas where strategic foresight is really gaining traction, and in looking at the challenges presented by the evolution of digital media, you can see why.

While the weatherperson makes predictions that are influenced by climate change, his or her general assumption is that a new type of cloud will not suddenly emerge and wreak havoc upon Southern Ontario. Similarly, many of the traders on the TSX floor are thinking about change in terms of corporations and currencies, not sweeping economic models. When it comes to the design of new media and their underlying technologies, you really do have to assume that what you’ll be working with in five or ten years’ time will be conceptually connected to where you started, but changed almost completely in many other ways. When Tim Berners-Lee was developing the HyperText Transfer Protocol and Uniform Resource Locators, he probably didn’t think his technological darlings would even survive to become adolescent acronyms… let alone act as the foundation for petabyte-pushing video servers and social networks. Media (and media decisions, such as the CRTC’s regulations on bandwidth caps) may live long and tumultuous lives, or they may be swept away by the next disruptive technology on the block within just a few years. As people adapt their habits to match emerging media, they wind up inventing new behaviours and expectations that could spell boom or doom for existing media, and shape the subsequent generation profoundly.

Although change is a constant fact of life, big-picture change can be another beast entirely – it often involves a rewrite of so many of a system’s functions and processes that the result can be almost unrecognizable. This is what is happening in media at an ever-accelerating pace, making the ability to design for an ever-increasing pace of change more and more valuable… whether you work in climatology, or interface design.

Richard Dawkins made his name as a popular scientist in the mid-1970s talking about the Selfish Gene, but his self-professed favourite work (and mine) is one that followed shortly after, titled The Extended Phenotype. In biological circles, the term phenotype refers to the expression of an organism’s genes – the traits that make it what it appears to be. You, me, and a fruit fly all have big, complicated genomes with surprising amounts of overlap. But the attributes that make us recognizable as humans and not as fruit flies largely result from what’s switched on in what order, not from what’s scattered all over, genetically speaking. Dawkins argues in The Extended Phenotype that the ecosystems encompassing an organism and its behaviour are just as much a part of its phenotype as the colour of its eyes or the length of its fur. The beaver’s dam, for example, alters the entire surrounding landscape, and that dam is a direct result of the beaver’s more conventionally considered phenotype. You’re following along nicely if this idea of a single organism (think system) radically remaking the world around it (other interconnected systems) seems familiar – it sounds an awful lot like a certain hominid and its relationship with technology, doesn’t it? The media phenotype, if you will, really can fundamentally shift every 10–25 years; and even if countless historical fads and meaningful innovations – like radio and television – remain present either conceptually or practically, the essence of the broader ecosystem may be fundamentally altered.

All analogies and biologies aside, there is a big question here: if the design of our media really can change the surrounding world within a generation, what are the best practices associated with making sure those media are the best they can be?

Fields of study dedicated to phenomena that are very big, very impactful, very interconnected, or all of the above are unfortunately few and far between. Economics and politics seem to fit the bill (occasionally), along with a handful of the arts and sciences. But at this point in time, it seems to me that strategic foresight exists natively at the intersection of the very big in time and space, and the very impactful. More succinctly, the field could be seen as being concerned with the creation of tools and best practices for dealing with the very important, in the medium and long term.

Through talking with Suzanne Stein, an expert in the world of strategic foresight, I’ve developed an appreciation of how diverse work in the field can be. Some proponents are interested in making sure that the verifiability of foreseen scenarios is what we focus on… others are more intrigued by using scenario generation techniques to understand group creative process. The CFC Media Lab uses scenario generation workshops to explore group dynamics and collaborative creation, and the focus of its resident-populated programs is iterative prototyping activity that might be familiar to fans of high-end design shops like IDEO, or firms like Scott Smith’s Changeist.

When I was a resident at the CFC Media Lab in 2004, I set out to create a new kind of hands-on educational experience for museums and art galleries. The resulting prototype was called Painting The Myth: The Mystery of Tom Thomson. While I was learning about a huge diversity of prototyping processes and new tools during its creation, what really struck me was how my understanding of what to do next felt like it was coming to me intuitively.

As interesting as this field of strategic foresight is, it’s really only one way to think about identifying and shaping new possibilities. It’s only one way to look at the design of things that no one has seen, or even thought of, before. The new media landscape is full of people who have been innovating in myriad ways, sometimes through regimented process, and sometimes on intuition alone. What are their stories?

During Steve Jobs’ announcement of the iPad 2 last week, he reiterated how Apple’s existence has always been at the intersection of technology and the liberal arts. This reminded me of how unique the space I work within really is. The work to be done in designing and cultivating new media happens at the intersection of business, technology, and art. What can we learn from those who are only beginning to articulate their own way of working in the field? What can we learn about best practices for designing new media from products, people, and organizations that have conventionally been considered outside of its realm of influence – teachers, artists, venture capitalists, ecologists, primary school students?

My plan is to spend the guest editorship I’ve been offered exploring the new media growing up around us, and sharing observations about the creation and adaptation of new tools, the formalization of process that has heretofore only existed as intuition, and innovation that is taking place in strange and exciting new ways.

Happy Winternet

With the seasonal holidays upon us – from the Solstice to Christmas, Hanukkah to Kwanzaa – I set out to write a post exploring what these darkest of days in the Northern Hemisphere mean to me as an individual with technology on the brain. What I learned is that the holidays, for all of their […]

Languages and their SQLs

Languages are the vessels of culture and history. Unfortunately, many of them are also going extinct at an alarming rate. As English, Mandarin Chinese, Spanish and a handful of other languages dominate the globe, ancient tongues are often unable to compete for a userbase of fluent speakers; leaving them vulnerable to some of the same […]

Security Toolkits

Earlier this week, Howard Schmidt (Cyber-Security Coordinator of the Obama Administration) launched the U.S. International Strategy for Cyberspace. Only days before, Howard Stringer (CEO, President, Chairman of Sony Corporation) announced his own cyber-security strategy of sorts. Unfortunately, while Schmidt’s strategy was presented as pre-emptive, Stringer’s came in reaction to a real-world cyber-security disaster. As you […]

Frontline Documentary

On January 12th 2010, violent earthquakes ripped through the earth’s crust 13 kilometres beneath southern Haiti. Hundreds of thousands of people lost their lives, and millions more were displaced from their homes. Massive collapses took place across health care, transportation, and communications infrastructure systems. Within hours, this world ripped apart was descended upon by a […]
