The GeoServices API provides a common interface across GIS, letting users and developers easily access information.
Today is GIS Day, an annual celebration that shares with our communities how geospatial technology helps us understand, affect, and engage with our physical world. You can join any of hundreds of local events around the globe to meet local experts, developers, analysts, government staff, and engaged citizens, and to learn how to access open data and spatial analysis tools that help you make sense of complex relationships.
At the interface of GIS is a commonly overlooked but incredibly powerful mechanism that makes it possible to uniformly access data regardless of the underlying technology or source of the data. It is this interface that allows a smartphone application to work with web sites and desktop analysis tools, meaning people can choose the user experience that best fits their needs while working from the same common information system.
The web is designed on this same principle that loosely coupled systems are more scalable, flexible, and usable through community and industry standards. The HyperText Transfer Protocol, or HTTP, is the protocol that nearly everyone has become familiar with as we enter website URLs into our browsers. There are many comparable services such as SMTP for Email, FTP for file sharing, SSH for remote computer access and so on. These are application layers that work on top of the rest of the Internet protocol suite and enable Twitter to talk to Gmail to Amazon to Skype and so on.
In GIS, there are similar application protocols, the most common of which is the GeoServices standard: the common API (Application Programming Interface) across all of ArcGIS. This specification is the nervous system of GIS that makes all of the technology work together.
From interoperability to mashups, an API is the gateway to dynamically requesting information. Through one common interface, any developer can access data independent of the underlying database, technology, or information domain. This means that users can access parcel data alongside business and demographic data and air quality measurements, all through the same interface. We are seeing thousands of organizations making their data openly available from the source through the GeoServices API.
The GeoServices documentation covers broad and deep functionality, from simple data storage and filtering to more complex statistical analysis, spatial processing, geocoding, and visualization. Through a simple HTTP-based interface, application developers can create their own unique applications powered by data coming from GIS hosted by authoritative government agencies or enterprise business data centers.
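As a concrete sketch of this HTTP interface: every GeoServices-compliant layer exposes a standard /query operation that accepts a SQL-like where clause, an output field list, and a response format. The service URL and field names below are hypothetical stand-ins; only the query parameters (where, outFields, f) come from the specification, and the sample response is heavily truncated for illustration.

```python
import json
from urllib.parse import urlencode

# Hypothetical feature-service layer; any GeoServices-compliant server
# exposes the same /query operation on its layers.
BASE = "https://services.example.com/arcgis/rest/services/Parcels/FeatureServer/0"

def build_query_url(where="1=1", out_fields="*", fmt="json"):
    """Build a GeoServices layer query URL (the standard /query operation)."""
    params = urlencode({"where": where, "outFields": out_fields, "f": fmt})
    return "{}/query?{}".format(BASE, params)

url = build_query_url(where="ZONING = 'R-1'", out_fields="PARCEL_ID,ACRES")

# Truncated example of the JSON a server returns; real responses also
# include geometry, a spatial reference, and field metadata.
sample_response = json.loads("""
{"features": [
  {"attributes": {"PARCEL_ID": "0042", "ACRES": 0.31}},
  {"attributes": {"PARCEL_ID": "0043", "ACRES": 0.28}}
]}
""")

for feature in sample_response["features"]:
    print(feature["attributes"]["PARCEL_ID"], feature["attributes"]["ACRES"])
```

The same pattern works whether the layer holds parcels, demographics, or air-quality readings; only the URL and field names change.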
Services that Grow
ArcGIS supports many open standards that make it easy for GIS to work in whatever application the user needs to work. What’s most exciting about the GeoServices interface is the comprehensiveness of capabilities, formats, and broad support. It is also a continually evolving specification that grows to support new features of the web such as realtime stream services.
So on GIS Day, learn how to use the world of data that is already out there, ready for you to explore and re-use in creative and meaningful ways.
Modern technology has dramatically increased the pace of software application development. Within hours a single person can now conceive, create, and distribute an app to millions of people. Thanks to the global internet, access to and updating of these apps occurs automatically and constantly. Products can be prototyped, measured, improved, and updated many times a day. The result of this rapid iteration is faster evolution and validation of product capabilities, minimizing time to market.
Often referred to as “agile development” or lean, this process is a fundamental shift in how businesses achieve market adoption and customer satisfaction. By contrast, waterfall development historically meant long and disconnected cycles of requirements, design, development, testing and delivery that stretch interminably and often discover late in the process new opportunities or missing requirements. The cost of development and delivery time using waterfall processes can mean projects become “too big to fail” yet also fail to meet critical business and customer objectives.
Despite the general acceptance of this new agile methodology by businesses, government has not effectively adopted such strategies. There are many valid reasons why government organizations must be more deliberate in their technology solutions – legal requirements, inclusionary capabilities, and formal policies may all direct software development. However, too often these processes lead to numerous examples of “failed IT projects” or high-risk programs measuring millions or even billions of dollars in wasted resources.
There is a tremendous opportunity for government to learn and adopt these iterative, evolutionary short cycles in order to serve constituents more cost-effectively and efficiently.
Agile Government
The United Kingdom and United States, for example, have created new digital services organizations focused on building an environment where agile development is accepted and workable whenever possible. They are helping evolve government policy, culture, and IT practices to make it appropriate and effective to prototype and develop software solutions in short, iterative cycles. Along with 18F, they are informing and influencing programs within government as well as the vendors that support government.
The USDS Playbook is a practical guide to building successful, modern technology projects. This playbook served as a set of requirements in a recent solicitation, the 18F Agile BPA, which challenged organizations to concretely demonstrate their ability to apply open, user-centered agile development to government solutions.
Within three weeks, over 80 companies submitted rapidly developed prototype applications that used the new OpenFDA programming interface. Sixteen companies were recognized for “[delivering] amazing, working software in response to our RFQ.”
GIS for Innovation
We were pleased to participate and to be recognized as one of the top submissions to the RFQ. Our Esri OpenFDA Prototype is only a prototype, but it is a compelling demonstration of how an integrated platform enables quick and iterative delivery. Our concept was an informative, explorable view of food and drug recalls that may impact a local community, aimed at users such as a school nutrition specialist or a healthcare official tracking larger-scale health issues.
Working entirely through GitHub as our collaboration hub, we designated primary user personas based on our experience in the health industry and connected with real people who could provide meaningful feedback. Then our designer created low-fidelity user interface wireframes that we tested in quick 10-minute interviews with our persona users. During daily standup meetings we reviewed the previous day’s progress, feedback from in-situ user testing, and new or modified development priorities for the day.
Using geography, we were also able to use US Census demographic data to provide context. In this example, we analyzed food and drug recalls by state, including the overall affected population.
Prototypes to Production
Throughout the three-week exercise, daily check-ins let us measure our progress against a regular cadence. New information meant we could improve the overall project without risking long delays or building unnecessary code.
This experience demonstrates a very fast process for validating and demonstrating concepts. While this project has ended, it is indicative of the process we employ across the entire ArcGIS platform. Many teams work in weekly sprint cycles and monthly or quarterly software releases that culminate in our yearly major releases. It’s a model that lets organizations of all sizes, from small startups to multinational enterprises and governments, effectively respond to changing requirements and user needs.
If you want to know more about how ArcGIS can be used to support open and collaborative government projects, connect directly with Lauren Lipovic at 703-506-9515 or talk with your local ArcGIS representative.
Today at the FedGIS conference in Washington, DC, the US Department of Homeland Security announced that the Homeland Security Infrastructure Program (HSIP) is now HIFLD Open.
Security and Safeguarding
In 2001, the United States was attacked in a coordinated effort at multiple locations which had devastating impact across the country. We had direct insight into how little we knew about our national infrastructure and assets that are necessary to both protect from attack and leverage in response.
One actionable response was the US Department of Homeland Security (DHS) and National Geospatial Intelligence Agency (NGA) collaborating to develop the Homeland Security Infrastructure Program, HSIP. For the safety of the nation, federal agencies connected and aggregated data from hundreds of regional and local data providers to compile over 500 national geospatial data assets that provide a complete picture of our roads, water systems, schools, communities, facilities, and more. The result was the Homeland Infrastructure Foundation-Level Data Working Group, known as HIFLD.
For the past 15 years, HIFLD has successfully managed the data aggregation, curation, and dissemination of these data through computer discs. Access has been restricted to FOUO – For Official Use Only; effectively limiting the access and reuse of these data to pre-approved agency members.
The DHS mission takes an “All Threats, All Hazards” approach. But Homeland Security is more than just physical or emergency management. It also needs to address economic security as well as non-traditional, invisible threats if we are to be a truly resilient nation.
They saw a need to empower every citizen to have an active role in our security. It was necessary to radically shift from yearly, physical media to realtime, on-demand and dynamic data available everywhere, by everyone.
Available today, HIFLD Open provides public access to over 250 datasets as dynamic web services, along with up-to-date downloadable files and visualization tools for users to explore and use these data assets. Data such as Alternative Fueling Stations can help local governments evaluate transit infrastructure investments as well as fuel availability during a disaster event; shipping infrastructure indicates opportunities for new businesses that need access to goods transportation; and unique data like public refrigerated warehouses is important if there is a need to keep vital materials cold.
ArcGIS provides a hosted infrastructure for DHS that provides not only data as web services, but visualization and analytical capabilities to local governments and communities. Using ArcGIS as their Data Platform of choice, DHS was able to curate and publish these data across multiple thematic groups and have an easy user experience that makes these data discoverable and explorable by anyone.
National security requires a Whole of Nation approach, and HIFLD Open marks the beginning of a new era that requires participation by everyone. We have an opportunity to be an Open Society, from citizen neighborhood watches to city law enforcement, state agencies, businesses, and community groups. This large step into the open by the DHS also sets a precedent for governments at the Federal, State, and Local levels to make more data freely accessible.
No one cares about a neighborhood more than the people who live there. People spend their days and evenings along the street, raise children, foster connections with neighbors, build businesses, grow gardens, bike, walk, and live. A few choose to engage with their civil organizations, advocating for positive change, or against negative impacts; they participate in civic meetings, and some even run for office in order to have a professional responsibility to their community. Our fundamental goal of democracy is to expand the engagement and active participation to every person.
Contemporaneously, the internet has provided a platform for immediate and global access to information and people. An increasing majority of people carry a web integrated, sensor laden, geolocated mobile computer that makes this access ubiquitous and pervasive. Whether merely reading or actively publishing information, we have an unprecedented ability to interact with both our physical and digital worlds in coordination – essentially integrating our neighborhoods with realtime and historical data about us and our communities.
One of the primary roles of government historically has been to gather resources in order to build physical infrastructure such as roads, parks, and buildings so that communities and commerce can grow and flourish. Increasingly, a new role is for government to provide a digital public infrastructure, one which supports access to information, in order to improve the efficiency of government operations as well as enable more meaningful decisions by constituents. More than just websites, these new digital services are more responsive, scalable, and, optimistically, more effective in serving people’s needs, especially when combined with open data, information analysis tools, and online forums.
All brought together, we have a tremendous opportunity to dramatically increase the agency of every citizen to have an active and integrated role with the operation of their government. Data-Driven Citizenship means that individual decision making, issue advocacy, and community planning are empowered with the tools and data necessary to make informed actions and observe the outcome of those actions. Instead of merely petitioning for change in a neighborhood, citizens can analyze existing and historical conditions, compare with similar situations and apply data modeling in order to suggest viable alternatives that government can most meaningfully enact. Other communities can connect and observe the factual and cultural outcomes of these projects to inform their own plans.
For example, as part of the United Nations’ International Day of the World’s Indigenous People, seven tribes developed analyses and shared data narratives that educate on issues at hand, including climate change, forest resiliency, and data sovereignty.
The city of Raleigh recently underwent a rezoning plan that could have dramatic impact on homeowners and businesses. Through community input on web tools for geographic drawing and comments, the city was able to make both broad city-wide improvements as well as understand the particular characteristics specific to local neighborhoods.
As part of its commitment to reducing pedestrian fatalities through the VisionZero initiative, the Washington DC Department of Transportation provided open data through both web and mobile survey tools where bicyclists, pedestrians, and motorists could mark areas of the city that were unsafe due to a variety of causes. For instance, many intersections had short crosswalk times, which meant pedestrians could be caught in the middle of the road when cars started crossing; bike lanes had disconnected routes, forcing cyclists to merge into the roadway with traffic that had difficulty identifying the incoming bicycles.
Cities and communities around the world are discovering how emerging digital tools such as open data, crowdsourcing, and StoryMaps can make citizens more aware of, and active contributors to, how government can improve neighborhoods for everyone. Consider your own initiatives and how data-driven citizenship can make a dramatic, positive difference.
Signed by President Abraham Lincoln in 1863, the National Academy of Sciences was formed to “investigate, examine, experiment, and report upon any subject of science” for the nation, congress and federal agencies.
Considering this was the middle of the tumultuous US Civil War, the government clearly saw the imperative for scientific research and collaboration. The period was also the scientific age of positivism, which focused on empirical evidence, reason, and logic. Since its auspicious inception, the National Academy has expanded to include Engineering and Medicine, forming a comprehensive, trusted organization that provides research leadership and experience to the nation.
Geographical Sciences Committee
Last week I was honored and excited to be invited as a member of the recently reconstituted Geographical Sciences Committee (GSC) of the Board on Earth Sciences and Resources. Together with my fellow committee members, we are devising a research study strategy focused on major issues and questions faced by Federal agencies and the scientific and engineering communities.
All reports are public and you can see previous focus on community disaster resilience, land change modeling and transformative research in geographical sciences.
The GSC is also the U.S. liaison to the International Geographical Union, and advises the National Academy Foreign Secretary on matters concerning international organizations, programs, and research.
Our tenure is through 2019, so for the next three years we will hear from communities on important research questions, convene experts and meetings to discuss the state of the art and opportunities, and produce high-quality reports that give context and direction to future work. My particular interests align with human geography and neogeography, and how we can expand the awareness and utilization of geographical sciences by other domains.
Government-University-Industry Research Roundtable
In June I also spoke at the Government-University-Industry Research Roundtable hosted by Policy and Global Affairs. The meeting, “Building Smart Communities for the Future”, shared experiences of smart communities around the world.
Our panel spoke about the opportunity for smart communities to support the United Nations Sustainable Development Goals and secondary cities. These cities, such as Medellin, Cusco, and Mekelle, are rapidly growing and modernizing but sit outside the typical perspective of large-scale global cities like London and New York City. They serve as visible and innovative incubators of technology, governance, and community that better represent the majority of urbanizing populations.
Increasing access to information and communications through smartphones is changing citizens’ ability to take an active role in their government and community development. How these capabilities evolve and empower people, but also raise questions of privacy, equality, and opportunity, is imperative to address.
I’m looking forward to our committee’s research work and future open workshops and meetings to hear from everyone on ideas for the future of geographical science.
Open Data exists for a purpose. From point of capture, to publication and analysis, data seek to be used to make better decisions. By making the data open, more people can participate in that analysis and decision making process. Particular to government and community, the more people can understand, collaborate and reach consensus, the better the likely outcome.
To support this goal of open data a few of us are starting a new group in DC that will focus on very specific initiatives and issues related to our city and use data and analysis to gain insight and hopefully provide effective solutions. Each month we will choose a particular issue and dive deep into that issue to understand the current state, historical precedence, objectives, policies and start a data-driven dialogue. To guide our explorations, a government representative will introduce the topic, share current work and plans, and be available to answer questions from the group during and after the gathering.
Our first meeting is this Thursday, October 30, and we are focusing on DC’s VisionZero plan to reduce pedestrian and cyclist fatalities to zero by 2017. Jonathan Rogers from the DC Department of Transportation (DDOT) will be our government and data expert. In preparation for this meeting, DC OCTO has made available a number of open datasets, such as the last 8 years of bicycle crashes, bike lanes, bike routes, and crowd-sourced locations of unsafe cycle conditions around the city. You can find the list and contributions in our GitHub repository.
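To give a flavor of the kind of exploration we have in mind, here is a minimal sketch that tallies crash records by year. The inline sample and its REPORTDATE column are invented stand-ins; the real OCTO extract may use different column names.

```python
import csv
import io
from collections import Counter

# Tiny inline sample standing in for the bicycle-crash extract;
# the REPORTDATE and WARD columns here are assumptions for illustration.
sample_csv = """REPORTDATE,WARD
2013-04-02,6
2013-07-19,1
2014-01-05,6
2014-11-23,2
"""

def crashes_per_year(fileobj):
    """Count crash records by calendar year of the report date."""
    years = Counter()
    for row in csv.DictReader(fileobj):
        years[row["REPORTDATE"][:4]] += 1
    return dict(years)

print(crashes_per_year(io.StringIO(sample_csv)))  # {'2013': 2, '2014': 2}
```

From a tally like this, the group can move on to joins against bike-lane geometry, per-Ward rates, and before/after comparisons of infrastructure changes.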
If you are interested in data as a hobby or profession, or want to apply your analysis skills to data-science the heck out of this data to help your fellow citizens, please join us! After this initial meeting we plan to migrate through the city, hosting events in each Ward at the local library, so that we can make this collaboration inclusive to all citizens and focus on local community issues.
And if you live in another city or town, I encourage you to start something similar focusing on local issues. Ideally, working at the convergence of government, analysts, technologists, and citizens means we can give open data the purpose it wants to achieve.
The wristwatch was an invention of convenience for extreme conditions. Previously, the pocket watch provided an elegant and portable mechanism for discerning the current time. However, with the increasing complexity of military maneuvers in the 19th century, and more popularly in civilian life with the advent of planes, pilots wanted precise time measurements that didn’t get in the way. So, ingeniously, they strapped the watch to the wrist, freeing the hands to fly the plane while still providing quick access to the time.
Over more than a hundred years, the wristwatch has become as much a mechanism of fashion as of information. It remains the only generally culturally acceptable display of machinery on our bodies. From digital devices in candy machines to bejeweled masterpieces, the fundamental concept of the watch is global.
However, the utility of a personal display of time is stagnant and arguably antiquated. It is difficult to avoid seeing the time displayed with nearly every glance at a wall, building, or device. Yet alternative wearable devices have attempted to emerge into the consumer mainstream for decades. Often cumbersome, complicated, and limited, no amount of marketing has made them a sustainable part of our wardrobe.
Who needs pants?
A 2014 survey found that people perform over 220 tasks per day on their phone, and carry one more often than they wear pants. There is also an emerging preference for current tech versions over trendy clothes. The handheld window to the web and our digital communities found a visceral niche in our psyche that dramatically altered our behavior.
Like our aviator predecessors, the concept of a hands-free, glance-able information display felt right. While the underlying technology available is tremendously complex, when it comes to our bodies we are reticent to display or deal with complex and awkward devices.
So when it comes to expanding technology out of our pockets and laptops, it makes perfect sense to adopt acceptable form factors and instead transparently transform the interactions we have with these objects.
Fashionable Cyborgs
While a smartphone is something we carry with us nearly constantly, a smart watch has the unique, paradigm-altering position of becoming part of my biology. Smart watches are watches only in shape. To consider them timepieces is a gross misconception. A smart watch is a network-connected, sensor-laden, interactive computer that is always visible and constantly in touch with my physical body.
At any moment it measures my heart rate, temperature, motion, galvanic skin response, and salinity, as well as the external environment, including light luminance, light direction, air temperature, humidity, and sound.
These data are then captured, connected, and streamed to mobile and remote processors. This is our first, subtle step toward the cyborg. So the question becomes: what do we want to do now?
Personal Big Data Device
I rarely want to know What time is it?
Rather, I want to know How much time until my next event? Time is merely data; what I want is information. Converting time into information requires balancing my schedule, current location, transit, pending tasks, and a myriad of other factors. Computers, and more recently smartphones, provide tools for balancing all of this data and offering alerts or suggestions based on our preferences. However, these devices can afford to be verbose in their management and interaction: we immerse ourselves in the act of using them in order to extract information.
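That conversion can be sketched in a few lines: given a schedule, answer the useful question (how long until the next event?) rather than the raw one (what time is it?). The events and times below are invented for illustration.

```python
from datetime import datetime

# Sketch of "time as information": convert a raw clock reading plus a
# schedule into the answer I actually want.
def minutes_until_next_event(now, events):
    """Return (minutes, label) for the soonest event after `now`, or None."""
    upcoming = [(start, label) for start, label in events if start > now]
    if not upcoming:
        return None
    start, label = min(upcoming)
    return int((start - now).total_seconds() // 60), label

now = datetime(2015, 6, 10, 9, 30)
events = [
    (datetime(2015, 6, 10, 9, 0), "standup"),        # already past
    (datetime(2015, 6, 10, 11, 0), "design review"),
    (datetime(2015, 6, 10, 14, 0), "1:1"),
]
print(minutes_until_next_event(now, events))  # (90, 'design review')
```

A real assistant would also weigh location, transit time, and urgency before deciding whether that answer deserves a glance at the wrist at all.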
By contrast, wearable devices like smart watches are the antithesis of this immersion. They must instead be passive, raising only relevant information in the appropriate context, balancing time, location, and urgency. I permit this device to live on my body, but the requirement is that it must be a good citizen and behave itself. Failure is banishment.
To succeed, wearable devices must become Big Data Devices. They must capture and process huge amounts of data in order to discern the small, highly relevant bits of information that require my human awareness and intervention. They must be smart in truth, not merely in name.
Internet of Humans
There are tremendous opportunities for smart devices. Consider that smart watches have a constant monitor on my current health. I don’t care about my current heart rate – but I do want to know if over the past four months a heart arrhythmia is detected and I am notified to contact my physician. In fact, all historic heart data can be sent ahead and used as part of the diagnosis.
What if a smart watch could detect a heart attack and automatically send an alert to emergency response including location and other important health and contact information?
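As a toy sketch of that idea, and emphatically not a medical algorithm: flag a window of beat-to-beat (RR) intervals whose variability is far above an assumed baseline. The baseline and threshold are arbitrary illustrative numbers; real arrhythmia detection is vastly more involved.

```python
from statistics import stdev

# Toy anomaly check over heart data a watch could stream: is the
# variability of recent beat-to-beat (RR) intervals abnormally high?
def irregular_window(rr_intervals_ms, baseline_sd_ms=40.0, factor=3.0):
    """Return True if RR variability exceeds factor * assumed baseline."""
    return stdev(rr_intervals_ms) > factor * baseline_sd_ms

steady = [810, 805, 815, 800, 812, 808]     # calm, regular rhythm
erratic = [810, 450, 1200, 600, 1100, 500]  # wildly varying intervals
print(irregular_window(steady), irregular_window(erratic))  # False True
```

The interesting part is not the check itself but the pipeline around it: months of background data establishing a personal baseline, and a notification raised only when a sustained pattern, not a single blip, emerges.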
Apple in particular is clearly signaling its intent in this direction. It is providing frameworks such as HealthKit and partners such as AirStrip, as well as, more mundanely, home automation, location, and other capabilities, in order to encourage innovation across domains while it offers a platform of human bodies pumping out data and attention.
Watch Apple’s WWDC 2015 keynote to see how they are building predictive analytics into their software. Your iPhone will be aware that each morning about 8am you go for a run and listen to particular genres of music. So as you head out your door and start your workout an appropriate album starts playing and your fitness tracking app starts logging.
Consider that applied to home automation including thermostat, lights, security system – or purchasing behaviors through Apple Pay that correlate to your activity and health – or even restaurant venue quality measured through ambient consumer devices that are on everyone’s wrist throughout the day.
Five years ago, Apple acquired a company called Color. There was much derision due to the pre-launch nature and cost of the acquisition. Color’s business was using every sensor on a device in order to capture, share, and replay the entire environment around a person. Carried to its end, with sufficient coverage from every digital device throughout the day across the world, they were inventing a form of Time Travel. Not the teleportation kind, but the ability to replay in full clarity, from any vantage point, any previously recorded event and navigate through it as an observer. Much like Microsoft Photosynth.
However, a phone is often in a pocket or a bag: muffled by fabric, its data suspect due to mixed environments, handling, and visibility. Wearable devices, instead, are emerging into the light and given prominence, and most vitally access, to the world, to the web, and to our physical beings.
The future is not an internet of things, it’s an Internet of Humans.
Today’s landing of a spacecraft on a comet is truly a stupendous engineering feat. ESA’s Rosetta spent five years of gravity-assist maneuvers, including three flybys of Earth and one of Mars, slingshotting through the solar system in order to land a 3-ton machine on a 4-km-wide rock moving at 135,000 km/hour (meaning the comet moves roughly ten times its entire length every second).
Given the past few weeks of difficulty and failure in the always-difficult engineering discipline of using high-energy chemistry and physics to propel objects at (literally) astronomical speeds, the Rosetta success is particularly welcome. No one working in the field of aerospace ever thinks their tasks are easy or error-free; their missions are audacious and must accept these grand failures to achieve even grander achievements.
Become a rocket scientist
If you want to try your hand at building, launching, and flying across the solar system, I highly recommend checking out Kerbal Space Program. It includes a decently accurate physics model so you can experiment with the trade-offs of spacecraft size, rocket size, and fuel. Earth is a large gravity well and requires large amounts of combustible material to reach orbit, where subsequent maneuvers become relatively easier.
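The size-to-fuel trade-off the game teaches falls out of the Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0 / mf). A quick sketch with an invented single stage:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds, wet_mass, dry_mass):
    """Ideal delta-v (m/s) for a single stage: Isp * g0 * ln(m0/mf)."""
    return isp_seconds * G0 * math.log(wet_mass / dry_mass)

# A hypothetical small stage: 20 t fueled, 5 t dry, Isp of 300 s.
dv = delta_v(300, 20_000, 5_000)
print(round(dv))  # 4078 m/s
```

Because the mass ratio sits inside a logarithm, doubling your fuel buys far less than double the delta-v, which is exactly why climbing out of a deep gravity well dominates mission design.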
Now is where your geometry lessons come in. Rotational dynamics can be mind-bending at first, and you start becoming obsessed with energy management.
Kerbal has a vibrant community that has created hundreds of mods for additional spacecraft parts, celestial bodies, AI pilots, and atmospheric dynamics. Many of these are on GitHub, and you could learn to build your own. If you really get into it, I recommend reading Fundamentals of Astrodynamics or Fundamentals of Space Dynamics.
Roads are a particular engineering feat that deeply affects our daily lives yet passes by largely unnoticed. So effectively are they designed and implemented that we typically notice them only through relatively minor but annoying failures such as potholes or flooding. Periodically, a major catastrophe reminds us of the nature and importance of these infrastructure components and the imperative to design them well, maintain them regularly, and replace them when necessary.
There are extremely well established practices for the design of roads, bridges, and nearly all physical infrastructure that compose our built environments in cities and communities. Centuries of practice, wisdom and science have been boiled down to codes and standards that prescribe the design of a road. Civil engineers rarely have the opportunity to truly design a road; more often they receive guidelines to be accomplished based on traffic volumes, load limits, environmental conditions, and available materials. Using processes of checklists, tables, and formulas, they crank through these to determine the basic characteristics of bed depth, width, rebar size and density, curb heights, etc.
At the beginning of the 20th century, as expansion thrived across the western United States, government agencies were dealing with a multitude of varied permits, maps, and engineering plans for new infrastructure. One land surveyor, having just started as the state engineer of Wyoming in 1903, “was immediately confronted by the unruly nature of engineering and land surveying in this vast, largely undeveloped state where hungry prospectors and developers were rushing to gain access to state water for irrigation purposes.”
To address this issue, the profession developed the National Society of Professional Engineers and the Licensed Professional Engineer.
A century ago, anyone could work as an engineer without proof of competency. In order to protect the public health, safety, and welfare, the first engineering licensure law was enacted in 1907 in Wyoming. Now every state regulates the practice of engineering to ensure public safety by granting only Professional Engineers (PEs) the authority to sign and seal engineering plans and offer their services to the public.
Qualities of a Professional Engineer
The Professional Engineer, or PE, is a certification that requires practitioners to prove their ability to design within standards, abide by a code of ethics, and have the experience and mentorship of another PE. After at least four years as an active engineer working on projects, and after passing a rigorous proficiency test, candidates are qualified to approve engineering projects.
Today there isn’t a road, building, bridge, or airplane built that has not met with the approval and stamp of a PE. As such, the engineer is putting their name and qualifications on the line that the design and construction meet industry standards and are reliable enough for the public to safely use. This responsibility is important to ensure that our local and national infrastructure incorporates the long history of engineering wisdom to protect safety and ensure operation.
As I mentioned in my previous post, the government information architecture is a new type of civic infrastructure that citizens and communities increasingly rely upon. Through web and mobile applications, news feeds, and online forms, we use the internet as a primary tool for engaging with city services. Fortunately, through APIs and other open information architecture, developers can create unique and novel applications that improve community livelihood, serve visiting tourists, and grow business opportunities.
However, beyond novel applications, as a civic technology community we are now building new tools that support basic services such as 311, housing, emergency response and safety. If these services are an integral part of our society, should we now expect the same level of quality and stability we do of our roads and buildings?
Recent history is replete with application contests, prototype apps, local civic hacks, and even startups or large-scale companies that built technology that did not sustainably scale to the region it was meant to serve. Is this something to be expected, or should we be developing a code of conduct and patterns by which we guide and even enforce quality and long-term maintenance of this new class of infrastructure?

Civic Imagination
“The Street finds its own uses for things— uses the manufacturers never imagined.” – William Gibson
This is not meant to curb the surge of energy and innovation that has also entered government through the rapid pace of technology development. The potential for new ideas to quickly emerge and evolve can dramatically improve civil society across every level of government. If government is the platform, then it is imperative to leverage this platform to put applications into the hands of real users. Government serves the long-term requirements of citizens, but technology has the capability to address emergent needs and interfaces. By contrast, we’ve likely all experienced attempting to use a government website that states our browser is ‘too new’ and may not work with the application.
So we need to balance this unmitigated rush with consideration for the impact it will have on these very real people with real expectations. What is a good balance between “cowboy coding” and “stagnant kludge”?

Towards a code of conduct
Fortunately, I believe that the field of open citizen development is maturing. Organizations such as Code for America, the Sunlight Foundation, and OKFN, and agencies like the new 18F and the UK GDS, provide the forums to professionalize ‘civic hacking’ and to develop codes of conduct for volunteers and companies that can address the issues noted above.
(All jokes aside) using standards is the first necessary step toward an operational and sustainable civic application. This means more than just JSON (which is an encoding); what matters is the actual schema and structure. There are numerous existing standards, and an increasing set of commonly used and understood ones. Civic organizations should develop guidance on the baseline required standards and the optional additional standards to use.
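To illustrate the encoding-versus-schema distinction, here is a minimal Python sketch. The two feeds and all field names are invented for illustration: both are perfectly valid JSON, yet a consumer written against one silently breaks against the other until a shared schema is agreed upon.

```python
import json

# Two hypothetical feeds publishing the "same" service requests.
# Both parse as JSON, but their schemas disagree.
feed_a = json.loads('{"requests": [{"id": 1, "type": "pothole", "lat": 38.9, "lon": -77.0}]}')
feed_b = json.loads('{"items": [{"request_id": "1", "category": "pothole", "location": [-77.0, 38.9]}]}')

def normalize(record):
    """Map either shape onto one agreed structure (the real work of a standard)."""
    if "id" in record:
        return {"id": str(record["id"]), "type": record["type"],
                "point": [record["lon"], record["lat"]]}
    return {"id": record["request_id"], "type": record["category"],
            "point": record["location"]}

canonical = [normalize(r) for r in feed_a["requests"] + feed_b["items"]]
print(canonical[0] == canonical[1])  # → True: same data once the schema is shared
```

The `normalize` shim is exactly the cost a common schema eliminates: every consumer must otherwise maintain its own translation for every publisher.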
In some cases it makes sense to develop or evolve standards – but consider how this impacts existing tools and how to build community and adoption with new and existing applications.
Few things are better than real experience. The wisdom gained from successes and failures provides the best guidance for building reliable applications. Each day a new developer joins the community, excited to build tools for themselves and their neighbors. Beyond simple “how to program”, we should be providing mentorship on technology architecture, testing, usability, government processes, and accessibility. The Professional Engineer license requires at least four years of mentorship and a sponsoring PE’s approval. What is the equivalent “merit badge” to demonstrate the acquired skills to appropriately design, develop, and operate these technologies?
We build applications with the awareness that they have a lifespan. We will conceive, launch, grow, and finally retire everything that we build. Nothing is eternal. From the beginning we should design long-term maintenance and a final transition plan into our applications. What is the plan to scale or grow to meet new requirements? How will the data gathered be preserved and migrated to the next-generation application so that we don’t lose years of valuable information and history? These don’t need to be the ultimate plans, but they should evolve as much as the application itself does. It is a mark of maturity to plan for that final obsolescence on day one, as part of the entire engineering design.
TechCamp Ramallah – mentoring the next generation of Civic Engineers…
Today, anyone with a computer and a bright idea can build an application to improve the lives of citizens. As a community, we should work together to ensure their idea meets the needs of those citizens and can grow to become part of the broader civic technical platform. If you’re at this week’s Code for America Summit I would love to chat about this topic – or feel free to reach me directly here, on Twitter or via email.
Last week I made a quick statement sharing my concern for civic organizations promoting ETL – Extract, Transform and Load – of open data instead of developing APIs. I felt it warranted a more thorough response than the terseness of microbursts.

Desire Lines and Road Surfaces
Walk through most parks or any college campus and you will quickly notice the dirt-worn pathways that cut between the sidewalks, indicating pedestrian shortcuts. These desire lines indicate an initial and repeated optimization that lies outside the paved paths. Often these ad-hoc networks are a type of ‘footstream’ that is adopted and paved – or they are left to individual use, muddy in rain, undocumented and unsupported by groundskeeping. At scale this pattern explains entire city, county, or even national road networks that started as ‘cowpaths’ and through continued and growing usage became official infrastructure – roads and highways – which are relied upon as a matter of business.
This road network is the infrastructure that government develops and promises to support as a necessary mechanism for citizens to build communities and for businesses to operate commerce. Information infrastructure is the next generation that government is developing, increasingly becoming the relied-upon and required tooling for community and commerce. Tim O’Reilly has referred to “Government as a Platform”, meaning that we must be able to rely on these services as a durable backbone upon which we build our numerous and diverse applications.

Opening Data: Prototype or Infrastructure
Open data started as simple file sharing. In my own city the data catalog was a large and easy-to-read list of datasets with metadata, links to common formats, and update dates. Through a series of public contests, developers used these file downloads to build some compelling applications that highlighted the future of Government IT. In 2008 the Apps for Democracy winners were iLive.at and ParkItDC.com, and in 2009 the Apps for America winner was 311.socialdc.org (n.b. all of these links are now dead).
It should be apparent that these contests and applications were interesting desire lines that did not provide or sustain a platform of information that citizens could rely on. Albeit simple examples, they are indicative of the tendency to build one-time applications that unfortunately miss the next step of becoming part of the platform they seek to improve. I’ve heard similar examples from other cities where civic hackers created well-meaning and well-built applications that sit so far outside existing government operations that they require continued manual maintenance by unpaid volunteers, with the common outcome that the service stops updating (perhaps while still operating, arguably a worse condition than simply shutting down).
Unfortunately, even the original data catalog has slowly atrophied. Based on my own experience looking for more recent crime data (the catalog stops around September 2013), I learned that the internal system was being migrated, the transformation process had faltered, and it just wasn’t a priority to get the system back online. It was too removed from their actual job of analyzing and responding to crime to make a separate feed available on any defined timeline.
Which is a stark reminder that despite the amazing capabilities technology can deliver, Government is foremost responsible for serving people, not serving technology. Everything it does in the end is to serve the communities that elect, fund, and generally are employed by, these governments. When most civil engineers design roads they don’t apply grandiose design aesthetics and creativity. They pull open their codes and standards, determine the appropriate concrete mixture, depth, and rebar based on specs, and get to work developing the road that fits the expected and reliable operations that citizens need.

Operational Open Data is Sustainable Open Data
Numerous studies, reports, case studies, and general community practice have made the case that open data has great potential benefits for civilian and business communities. Not least of these is the ability for government agencies to more easily share data with one another, in addition to improving business efficiency and consumer decision making.
Government has a difficult and extremely important job. As an entity, it is not enamored with new techniques or formats. Attempting to create unidirectional bifurcations of the data creates strain that will eventually give way under any pressure: time, fiscal, or personnel. New technologies need to understand these processes and costs in order to align with them if they are ever to become part of the government platform.
For open data to move from a shortcut to part of the stable infrastructure, we need to design it from the beginning to be practical, sustainable and ultimately operational. Open data needs to be the way government operates, and it needs to be part of the living systems that manage and process the data as part of day-to-day business.
To the original question: generalized techniques such as ETL – extract, transform and load – have tremendous flexibility to explore new paths and opportunities. By enabling the freedom to explore applications, new formats, and communities, government can observe and understand these desire lines. It can then decide whether these paths become part of the supported network or whether they indicate a necessary redesign of the larger system to accommodate these concepts. Personally I’ve seen, and built, many ETL tools and community applications that worked from outside of government. While fast-moving and extremely agile, they are ultimately untenable as ongoing, durable platforms for information access.
By comparison, we should be encouraging and working directly with government technical staff to specify and prototype APIs – application programming interfaces – as they provide an excellent path from prototype to adoption. By developing an external interface to a service, the provider is making a contract with end-users that is independent of the implementation details. This enables a developer to use their tools of choice, with the intent that the service could be rebuilt within government infrastructure while maintaining the promised interface that applications already rely upon. And finally, like observing the increasing depth and width of a dirt path, the measured analytics behind an API help prioritize incorporation and operationalization.
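The contract idea can be sketched in a few lines of Python. The interface, data, and class names below are hypothetical, not any real government API: consumers code against `get_incidents()`, and the backing implementation can move from a volunteer's quick CSV prototype into government infrastructure without breaking a single caller.

```python
from abc import ABC, abstractmethod

class IncidentService(ABC):
    """The promised interface: stable regardless of implementation."""
    @abstractmethod
    def get_incidents(self, since: str) -> list[dict]: ...

class PrototypeCsvService(IncidentService):
    """Desire line: a quick external prototype reading exported rows."""
    def __init__(self, rows): self.rows = rows
    def get_incidents(self, since):
        return [r for r in self.rows if r["date"] >= since]

class InternalDbService(IncidentService):
    """The paved path: rebuilt inside government systems, same contract."""
    def __init__(self, db): self.db = db
    def get_incidents(self, since):
        return sorted((r for r in self.db if r["date"] >= since),
                      key=lambda r: r["date"])

rows = [{"date": "2013-09-01", "offense": "theft"},
        {"date": "2013-08-15", "offense": "burglary"}]
for service in (PrototypeCsvService(rows), InternalDbService(rows)):
    print(len(service.get_incidents("2013-09-01")))  # → 1 from either implementation
```

Either implementation satisfies the same promise, which is what lets an external prototype graduate into operational infrastructure.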
A recent exemplar of this is the DCAT distributed catalog specification. Neither new nor novel as far as federated data catalogs are concerned, it is an API that was created in conjunction with technologists and government agencies, adopted independent of any technology implementation, and is now poised to easily share links to data between numerous national and local government agencies, all in public. Instead of building more data harvesters, an API means that anyone can participate in both the production and the usage of open data however best fits their needs.

Desire to Collaboratively Craft
Perhaps the most exciting thing I have observed in my six years living in DC and watching the Open Government movement surge has been the positive growth and excitement of people within government to actively and publicly collaborate. More than merely publishing a catalog and running a competition, government representatives are eager to talk about ideas, share code and data, and hear where they can open their infrastructure for these types of creative developments.
While much of the commercial web is becoming ‘appified’ (and often eschewing access via common or open APIs), perhaps this is one case where it is a good thing that government moves more slowly and is only now entering the era of the programmable web. For many of us who volunteer our time and expertise hoping to improve the civil societies in which we live, the best thing we can do is work closely to advise and create the best platform possible.
This weekend I participated in a fun panel on Data Visualization as part of World Information Architecture Day in DC. The moderator was Sean Gonzalez from Data Community DC, and the panel included Amy Cesal from the Sunlight Foundation, along with Maureen Linke and Brian Price from USA Today / Gannett Digital.
You can see the video here.
There was a clear and interesting gap between our perspectives as storytellers versus tool creators. Data journalists such as Maureen and Brian focus on a story and develop or use tools in service of that story. From one theme to the next they reuse these tools, but each output is a uniquely crafted experience in order to best convey a story.
By contrast, I focus on building platforms and tools that enable anyone to develop their own story. In order to do this, I need to think about the generalization of data management and visualization capabilities to adapt to a wide range of applications. The tools need to permit customization without requiring indoctrination, such that the storyteller can focus on their goal without the tool getting in the way.

Importance of Data
Common across all of our disciplines, and the point most reiterated, was the vitality of data: first finding quality, authoritative data, and subsequently the effort to clean, validate, normalize, analyze, and finally portray it. The best visualization is useless if the data are suspect.
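That cleanup pipeline can be sketched in a few lines of Python. The raw extract below is invented for illustration, with the kinds of defects found in the wild: inconsistent label casing, a missing value, and mixed number formats.

```python
# An illustrative (made-up) raw extract before visualization.
raw = [
    {"ward": "Ward 5", "count": "1,204"},
    {"ward": "ward 5", "count": "96"},
    {"ward": "Ward 6", "count": None},
]

def clean(rows):
    out = {}
    for row in rows:
        if row["count"] is None:                      # validate: drop unusable records
            continue
        ward = row["ward"].title()                    # normalize labels
        count = int(row["count"].replace(",", ""))    # normalize number formats
        out[ward] = out.get(ward, 0) + count          # aggregate for the chart
    return out

print(clean(raw))  # → {'Ward 5': 1300}
```

Only after this step is the data trustworthy enough to portray; skipping it is how suspect charts get published.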
Fortunately, finding data is getting easier. Driven by open data initiatives and supported by specific and growing open data catalogs published at the source, there is less effort in discovering relevant information to use for your visualizations.
Imperative to proper journalism, and to the web, is the requirement to cite your source. Even more, linking to the source data and authority means that users can track back to the raw data and create or validate their own findings.

Evolution of Medium
Along the same lines as web links, a few audience members asked when each type of visualization is appropriate. When is a static image sufficient, and when should you use a complex interactive visualization?
Our discussion explored the idea of responsive visualization where it is important to understand the reader’s medium and situation of consumption: mobile phone on the metro, laptop in the office, or a mix of both? Personally I tend to find interesting articles on my phone and bookmark them for viewing later in full resolution on my laptop or iPad.
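The decision can be sketched as a simple function. The breakpoints and asset names below are hypothetical, chosen only to illustrate the responsive idea, not any standard.

```python
# A sketch of "responsive visualization": serve a lightweight static image
# below an (assumed) width breakpoint, richer interactive assets above it.
def choose_visualization(viewport_width_px: int, on_metered_network: bool = False) -> str:
    if on_metered_network or viewport_width_px < 768:
        return "chart-static.png"             # phone on the metro: fast and sufficient
    if viewport_width_px < 1200:
        return "chart-interactive-lite.html"
    return "chart-interactive-full.html"      # laptop in the office: the deep dive

print(choose_visualization(375))    # → chart-static.png
print(choose_visualization(1440))   # → chart-interactive-full.html
```

In practice the same decision is often made client-side with media queries, but the principle is identical: the medium of consumption selects the depth of the visualization.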
By developing responsive visualizations, a story can provide a fast and sufficient static image on a mobile device while growing into a deeper visualization on a computer.

Resources
We shared many specific resources through the discussion. A few of the highlights include Nathan Yau’s FlowingData, replete with examples, critique and tutorials. O’Reilly has a number of books on data visualization and data science that walk through detailed methodologies and examples. Journalists should check out NICAR.
A year ago I decided to become a bike commuter. I live on the east side of Washington, DC, and we just opened our new office on the edge of the Potomac River on the west side of DC. Inspired by my colleagues in Portland who constantly tout the wonders of bicycling their fair city, I found that DC has adequate and continually improving bike accessibility.
Over the year I commuted every day that I was in town, rain or shine, whenever I did not have to be dapper in a suit – about 1,000 miles in all. In those miles I came to prefer biking as the best mode of commuting, and would dread the days that required a metro ride packed to standing room, with concrete, lights, and stations floating by in the underground wormholes.
By contrast, my surface excursion could take one of several routes: either a scenic trip along the National Mall, cruising past the Capitol Building, Washington Monument, Reflecting Pool, and Lincoln Memorial and skirting the rolling hills of Arlington Cemetery, or the shortest route along the obscure but great H Street corridor, which is actually home to numerous global institutions.
My preferred route along H Street took me past Union Station, through Chinatown, over the newly constructed “City Center”, past the American Academy of Sciences and the World Bank Headquarters, the White House lawn, and finally through historic Georgetown or even along the small canal. Those quiet mornings, with the sun rising and the fog burning off the Potomac, are only possible when you can stop on a whim to enjoy the city.
Besides the unique aspects of DC, having a bike gave me ultimate flexibility to detour through cafés in the morning, or to get to meetups throughout the city in the evening, without worrying about metro stop locations or bus routes.
Even travel estimation was simpler, as my times were consistent regardless of non-homogeneous traffic. My trip from house to work was almost exactly 30 minutes, and one of the best parts was cruising past the traffic-jammed cars on a route that would have easily taken 45-60 minutes by car.
Of course, urban biwheel transport is not without its threats. Coming from the east side of the city, if I left close to an hour mark (e.g. 8am or 9am) I was sure to encounter Maryland drivers running late, speeding through lanes, and annoyed by the bicyclist sharing the road. A secret pleasure was seeing them speed past only to stop immediately at a red light, where I would pull up next to them again.
This highlights an oft-repeated criticism of DC: there are numerous bike lanes yet little connectivity between them. It is not uncommon to follow a cycle track for a few blocks only to have it simply stop at an intersection, with no identified way to continue other than by occupying a car lane. There are continued efforts to add more bike lanes and signage that I hope will result in a better and fully connected urban bicycle network.
People viscerally engage with their personal technology devices. Recent studies indicate that we spend 6.5 minutes of every waking hour with our phone, more time than we spend with our partner. Anecdotally, I have heard that we have our mobile in hand more often than we are wearing pants.
Fortunately we adapt. At least in my experience, etiquette now precludes phones being left on tables during meals or friends reading during a conversation. We modify our behavior to selectively use and then put away our technology. Send that message and the device is relegated back into the dark, invisible recesses of our pockets or bags.
What made mobiles initially so pervasive as information access devices was that they were prevalent but, more importantly, non-invasive. Unlike a laptop, which creates a perceived physical wall between the user and the public, a phone masqueraded – like a hollowed book – as a normal device in which a person could hide the internet. This cloak faded as people acculturated to the new interfaces.
Displays continue to shrink and conform. The Kindle broadly introduced the concept of a screen that performs as a book, and little else – affording an acceptance much like someone reading the paper. We are now seeing the emergence of a new paradigm of social device interaction.

Fourth Screen
In the early 20th century, at the birth of aviation, pilots found it precarious to operate their aircraft while using pocket watches to plan flight paths. As a result, wristwatches were adopted to enable pilots to quickly and constantly monitor the time without removing their hands from the flight stick. This new heads-up style of display became extremely popular over the following century, until it was largely displaced by the aforementioned mobile phone.
But the wristwatch again has the opportunity to replace our pocketed phones and provide us with a heads-up interface to our connected devices. The recent Pebble Watch displays incoming messages and calls, and can even be extended to call out to various web services for weather, delivery information, and location.
This is not entirely new. I had a DataLink watch in 1995 that wirelessly synchronized with my computer via a mesmerizing flickering display. Smartwatches, dating back to 1972, have iterated through popular and usable formats but, like early smartphones or the web desktop, required several cycles to truly establish themselves. More recently Samsung released their Galaxy Gear watch, and you have been able to wear your iPod Nano as a watch for a few years. There are rampant rumors that Apple is working on a watch.
Google Glass is the harbinger of ocular augmentation that directly overlays our visual field. There already exists a backlash over the obtrusive glowing screens and the concern over whether someone is paying attention to you, referencing something, or even photographing you – much to the chagrin of nearby people.

Ambient Awareness
Connected watches and other wearable devices offer a larger opportunity for ambient awareness. Our sensor-laden mobile phones, replete with microphone, light sensor, accelerometers, and GPS, are enclosed in the dark caves of our pockets, missing so many contextual clues.
By contrast our wrists remain exposed – constantly dappled by the light, reverberating from the punctuated sounds, sensitive to air temperature changes, visible to wireless signals, and even experiencing a wide range of motions from walking to waving, shaking hands, and opening doors.
As a highly ambient information display, the watch offers an unparalleled platform. Consider also that Apple bought Color, a team experienced in multi-sensor fusion and device content distribution, and a future of integrated contextual passive alerts with casual interaction appears imminent.

Surreptitious Engagement
In the end, computing is becoming ubiquitous, pervasive, and non-invasive. We accept new technology with its foibles and obligations but we ultimately desire it to blend into our periphery where we can always engage, but never interfere.
Fortunately it is still about the human interaction, even if we get a little machine help in the process.
Last week I immersed myself in the culture and community of map design. The biennial International Cartographic Association conference is the major gathering in cartography, held this year in historic Dresden, Germany. It was my first time attending, with little prior awareness of or expectations for the community – one that throughout the week I both enjoyed and regarded with some consternation.

Discovery
The week kicked off with a joint workshop of the Commissions on Map Design and Neocartography hosted by the illustrious Steve Chilton and Ken Field. Throughout the day many of us shared our thoughts and suggestions on the concepts of digital, personal, interactive, realtime cartography. I will write up my talk separately, but the many other attendees covered insightful areas of work and ideas. Julia Mia Stirnemann, a designer by background, showed the importance of projection in storytelling and perspective, demonstrating her WorldMapBuilder. Beate Weninger demonstrated the clear case and design work for better colors in digital cartography – particularly gradient color ramps that account for color vision deficiencies and even situational color blindness caused by ambient environment, lighting, screen displays, and other non-controllable interferences with your maps.
Throughout the conference there were plentiful gems of research and ideas. One of the best talks was by Ian Muehlenhaus on the methodology and rhetoric of the dark side of cartography, persuasive maps.
There were other crossovers from GIS to cartography, in particular dealing with the growing amount of crowd-sourced data that may need new uncertainty analyses and types of uncertainty visualization. Similarly, there was an entire session on temporal animation.
Neogeography was well represented, from a smellmap to [community gathered tribal maps] and more artistic portrayals such as [The Visitors].
I highly suggest perusing the conference schedule. There are good papers, many with the PDFs included. You may find inspiration or references to include in your own work.

Tribal
The ICA is over 50 years old and reflects a wide view of the historic and emerging roles and techniques of cartography. The ICA operates through a number of volunteer-created commissions, each formed on the basis of identifying a particular aspect of cartography that merits discussion, research, publication, and evaluation. These commissions alone indicate the range of cartography: from the general Map Design and GeoVisualization to the very specific Globes and even Planetary Cartography.
Throughout the week conference sessions are organized around themes within these commissions. The structure works to encourage the community to adapt to emerging trends and areas of interest that can subsequently foster micro-communities or hypothetically reach out to external groups.
if your community is shrinking while your domain is booming then you might need to broaden your perspective
— Andrew Turner (@ajturner) August 27, 2013
In my experience, typical of any long-running and well-established community, there exists a self-fulfilling echo chamber. During one meeting a remark was made that everyone is making maps, yet the ICA and cartography journals are shrinking. The knee-jerk response was that the industry is not supportive of cartography – meaning there are fewer “cartography professorships” and research publications. These reactions miss the point that it is broader, and different, groups that are adopting cartography as a practice within their domains, and they would benefit from active and continuous engagement by communities like the ICA within these external communities.
The ICA conference was vibrant, with a clearly emerging group of newer cartographers and even digital technical aspects that indicate a good future for the association – or at least for that community. I was truly impressed by the class of International Cartography Master’s students, who spend three semesters stationed at different technical universities learning the many historic and modern capabilities of cartography, culminating in a final semester focused on a capstone project. The students I met were intelligent, capable, and excited to talk and share ideas.
Many of the ICA groups attend other geographic conferences such as NACIS, or FOSS4G and State of the Map – but I hope that more also reach to the developer and web communities to share their expertise, insights, and collaborate.
Aaron and I were discussing how the web facilitates inspiration and sometimes even copying of other sites or applications.
This is a positive outcome of open access that can create evolutionary improvements. However, in my experience I have also seen people clone an interface while missing the larger context of the interaction; or worse, they cloned something that was internally known to be a quick hack or incomplete solution, delivered to meet a deadline or as the first phase of a multi-phase plan that was never completed.

Buran and Enterprise
This reminded me of a fascinating chapter in the history of the U.S. and Russian space programs. During the space race of the 1970s and ’80s, the Russians were known to be accessing the unclassified engineering plans for the upcoming Space Shuttle. While this openness was necessary, and arguably a greater good for science and industry, the US program did not want the Russians to beat them to delivering a reusable launch vehicle.
I proposed using the Farewell material to feed or play back the products sought by [the Soviets], only these would come from our own sources and would have been ‘improved,’ that is designed so that on arrival in the Soviet Union they would appear genuine but would later fail. U.S. intelligence would match Soviet requirements supplied through Vetrov with our version of those items, ones that would not — to say the least — meet the expectations of that vast Soviet apparatus.
From “How the Soviets stole a space shuttle”. And apparently this ploy was successful: “Soviets have ablative material in their elevon gaps, just like we did. We fooled them and now use tiles in the gaps.”
Receiving inspiration from, and even copying aspects of, other applications is clearly an effective means to jumpstart features. However, this should not preclude your own diligence in engineering and design to ensure you have appropriately incorporated these concepts to an effective, and operational, end.
Like any parent, I am constantly concerned about my child’s health. With data it is easier to identify emerging problems and to diagnose underlying causes as a precaution rather than merely a reaction. For these reasons we spent the past year on our Quantified Baby project.
Throughout the year we have been maintaining various quantitative and qualitative data points of our son’s habits and growth. From the day he was born we gathered every time he ate, slept, pooped, peed, and took a bath. This included the quantity of food, time of sleep, and even (optionally) the color, consistency and leakiness of his output.
Overall our son has been extremely healthy, happy, and effusive. To date he has only been sick once, with a common cold for about 5 days, has had no allergies, and slept through the night after the first three weeks once he was effectively gaining weight. While I know that our data capture and analysis doesn’t account for our fortune, we do believe it was beneficial to our own parenting, regularity, and ability to be aware and informed of his health for any doctor visits. We also didn’t sweat all of the details; there are a few gaps due to lack of sleep, general distraction, or sometimes simply bucking the machine (i.e. “I don’t wanna”).
We learned a lot of insights along the way, particularly in the methods, benefits, and difficulties of quantifying your baby. For our measuring we used the capable, full-featured, and easy-to-use Total Baby. Designed by an engineer for his own family, it has the practicality of being built by someone who is also forced to use it bleary-eyed, with a crying baby in one arm and a bottle/diaper/blanket in the other hand. This is not meant as a review of that particular application, but rather as a highlight of what the baseline required features of any quantified-self application should be.

Make it Useful
Many ‘quantified self’ applications take a conceited view of their data capture and require you to enter data with little to no information value returned to you. These tools are fun at first but are quickly forgotten, since they require action outside of the normal activity.
By contrast, quantified-self tools must provide at least some immediate value. For example, our measurement of feeding provided a timer of the current activity, and also set a (configurable) alarm for the next feeding, which at the beginning would typically occur in 2 hours. When you’ve been cycling through several days of polyphasic short sleeps, this type of simple arithmetic becomes harder than you may imagine.
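The recurrence-alarm logic is simple arithmetic, which is exactly why it is worth offloading to the app. A minimal sketch of that logic (the function name and the 2-hour default are my own illustration, not Total Baby's actual implementation):

```python
from datetime import datetime, timedelta

# Default interval between feedings; configurable, as in the app.
FEEDING_INTERVAL = timedelta(hours=2)

def next_feeding_alarm(last_feeding: datetime,
                       interval: timedelta = FEEDING_INTERVAL) -> datetime:
    """Return the time the next-feeding alarm should fire."""
    return last_feeding + interval

# A 6:30 feeding schedules the next alarm for 8:30.
alarm = next_feeding_alarm(datetime(2013, 4, 1, 6, 30))
```

Trivial as it looks, having the device do this consistently at 3 a.m. is the entire value proposition.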
So by providing an immediate benefit (an automatic recurrence timer) we were clearly incentivized to keep using the application. A missed feeding would quickly get entered so we knew how he was doing throughout the day, and a simple count of the times and amounts he fed ensured positive weight gain. You can imagine the potential for existing social networks to measure and alert on emerging trends that may impact your health through continuous input of all of your data.

Make it Easy
While we are both engineers, when first learning to care for another human life your focus tends to be on their needs, not on learning complex applications. Actions and questions need to be extremely clear, and operable with a single hand and thumb.
It should also be easy to get quick statistics at a glance so that you can take action if necessary. With our tracking we could immediately see the time since the last feeding, changes, sleep, and even bath or other customizable timers. This bio-dashboard reassured us, through parallel mechanisms, that we were tracking the data and that everything was on track with him.
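The at-a-glance statistics amount to a simple query over the event log: for each activity type, when did it last happen? A sketch of that dashboard computation, with a hypothetical event log of my own invention:

```python
from datetime import datetime

# Hypothetical event log: (timestamp, activity) pairs as the app might store them.
events = [
    (datetime(2013, 4, 1, 6, 30), "feeding"),
    (datetime(2013, 4, 1, 7, 15), "diaper"),
    (datetime(2013, 4, 1, 9, 0),  "sleep"),
    (datetime(2013, 4, 1, 11, 45), "feeding"),
]

def time_since_last(events, activity, now):
    """Minutes since the most recent event of the given type, or None if absent."""
    times = [t for t, a in events if a == activity]
    if not times:
        return None
    return (now - max(times)).total_seconds() / 60

# At 12:45 the dashboard would show 60 minutes since the last feeding.
minutes = time_since_last(events, "feeding", datetime(2013, 4, 1, 12, 45))
```

The point is that the data you are already capturing answers the question you actually have ("is it time yet?") without any extra entry effort.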
Automated tracking is even better, as it doesn’t require remembering and isn’t limited to observed actions. Deb Roy’s work in ‘The Birth of a Word’ demonstrates passive monitoring of language development that would not be as clear from limited measurements. I had originally planned a few hardware sensors that would measure crib pressure for motion in sleep and rolling over – but as they say, “finish all your projects before the kid is born; you won’t get any done afterwards”.

Make it Shareable
The data are useful to one person, but arguably are just a substitute for memory. With two or more people, however, a quantified-self application becomes the collaboration center-point, with multiple inputs and actions on the information. When my wife would change diapers or finish a feeding, my own phone would update with these events and I would be continuously informed without having to ask or, even worse, waking her up.
The data also worked on any of our iOS-based devices. So instead of having to chase down a single unique device, we could grab whichever iPhone or iPad was nearest to us – which usually wasn’t far, as they are also great devices for reading while rocking or playing some soothing music – proving the mobile phone in and of itself the greatest invention for parents.

Make it Open
A quantified-self application should be expandable and open to customization. While Total Baby by default has a good configuration of feedings, colors (if you need to be that specific), food types, and activity types, you are also able to add your own activities, timers, and, as I mentioned before, automatic reminders. That means if you have special needs, medications, or therapy, or just want to validate that grandparents are getting equal time, it’s important that the application allow you to add these as necessary.
A surprising and welcome feature of Total Baby was that all of the data were exportable as a spreadsheet CSV and a SQLite database file. While not the cleanest data model (times are just strings in the event descriptions, which are not enumerated), it allowed us to play with charting and general metrics as our son grew. The growth charts and other timelines are also all exportable and emailable to family, doctors, or caretakers. I can also make backups in case any of our devices are lost.

Opportunities for Health
“It’s your body, make sure you know what is going on with it.”
In personal health, I’m continually astonished by the lack of personal data that is measured and retained. I have to fill out basic details such as address or blood type so many times that I am dubious that real information such as blood levels, heart state, or other extremely important measurements are gathered, available, or used for any kind of passive analysis and alerting.
While I do trust my health practitioner, no one is as invested in the health of my body as I am. Doctors are busy and burdened by antiquated technology that prevents them from providing the care they would really like to. There have been numerous attempts at revolutionizing the healthcare industry that fail through lack of focus, complex policy, privacy issues, or just lack of general initiative.
I want to see the same technology that reroutes traffic, recommends new shoe ads, and sorts my email to understand when I need to alter medications, change my behavior or visit a specialist who upon my arrival would have a full, and appropriate, analysis and access to my data history.
By focusing on the quantified self, we are empowered to monitor and maintain our own well being – and obviously with children to assist in theirs as they grow. I can invest the time, effort, and even technological development that creates a solution now that will aid my own family.
The technology and methodology of digital interactive cartography are nascent but evolving. The fields of human-computer interaction and GIS are converging from both the consumer market (“where is my nearest good restaurant?”) and the enterprise (“where should I open a new restaurant chain?”). Designing useful interfaces requires understanding user workflows, iterative development, and testing. Too often this effort is slowed by conflicting viewpoints, changing market and business models, or even lack of imagination.
While the consumer and enterprise markets are slowly iterating through concepts, there is nothing so significant as a crisis, where minutes and meters can mean the difference in saving a life. What can we learn from the agile and emergent development of tools during short-lived response events that provides insight into further research and development? This post is some notes from my talk, “Crisis Cartography – AAG 2013”, at the American Association of Geographers annual meeting in Los Angeles.

Familiarity and Expectations
During a crisis people use the tools that are available and, ideally, familiar. This often means repurposing in ways that were never intended, but it yields innovative new applications. For example, during Hurricane Katrina people caught in New Orleans could not call 911 or other emergency numbers. Instead they would text family members in remote states such as Michigan, who would then call back down to the Red Cross with the address of the person stuck in the flooding. By contrast, typical mapping and analysis would take days to gather data such as shelter locations, flood-model inundation zones, and finally the proposed response. With people knee-deep in water, this restricted and limited capability had a demonstrable and severe impact on the efficacy of the response.
Subsequent disasters demonstrated the effective repurposing and development of cartographic tools, such as the New York Public Library historic map warper being used to rectify unclassified CIA maps of Haiti, which were in turn used to derive road networks in OpenStreetMap. Then, using consumer-grade hiking GPS units, this data was made available to search-and-rescue teams from Virginia to find the locations of trapped individuals. What had been designed for researchers and public volunteers provided an easy-to-use, on-demand, and flexible mapping interface for people around the world to give overnight support to responders deploying on the ground.
This cycle of prediction, warning, response, relief, and reconstruction is a well-known pattern in disaster management. Unfortunately, current typical cartographic products are either static aggregates that provide only cursory assessment at a coarse geographic level, of little use in actual response and planning, or they are disjointed, out of date, and paper-based. During a disaster there is cognitive overload from the inundation of information, and with a static product there is no way to filter or zoom into a particular area of interest. It is a byproduct of a bygone process of unidirectional information flow through a priori information channels.
Fortunately the information landscape is dramatically evolving. In the past few years alone the ability to dynamically share data, collaborate and develop maps has enabled new mechanisms for understanding and response. In reflecting on the Haiti response, particularly relating to the public information sharing, numerous emergent technologies as well as traditional organizations were able to innovate and provide better access to important data and support. This will be an increasingly imperative capability as the types of data, and expectations of response are changing as well.
In recent American Red Cross surveys, they discovered a number of surprising traits.
Simply put, the people involved in the informational aspects of crisis response are changing. Technologists, domain experts, diaspora, and numerous other citizen communities now actively and intensely engage within moments of a disaster to build maps and gather geographic data. And the people in the affected area are leaning on their daily tools, such as mobile phones, Twitter, maps, and Facebook, to find out about and share the event as it unfolds.
The result is that cartography, and in particular interactive geographic analysis and visualization, is evolving. Ushahidi is a well-known platform for gathering and publishing crisis data from eye-witness reports and aggregated media. Emerging from the 2008 Kenyan elections, it is a simple example of the need for, and creation of, a new tool for the public and media to temporally animate, investigate, and track realtime reports.
However, there are still many shortcomings in the current tools. While they have had clear positive impacts, the combination of ad-hoc technologies separated from user-interaction design or disaster-management workflows lessens the benefit of the thousands and millions of volunteer hours being contributed, hours that could help save lives.

Crowd Sourced Data
At the heart of the volunteer technical communities, and arguably the capability they are most suited to support, is creating, curating, and making data accessible. Using tools designed for casual bike riders and pub connoisseurs has actually proven remarkably effective. The flexibility designed in to accommodate peculiar attributes such as wheelchair accessibility or tree diameter meant that the same database could house a humanitarian data model.
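That flexibility comes from free-form key-value tagging: any attribute can be attached to a feature without changing the schema. A sketch of the idea, where the tag names follow OpenStreetMap-style conventions but are illustrative rather than the exact humanitarian data model:

```python
# A feature is just a dictionary of tags; everyday attributes and
# crisis-specific attributes coexist without any schema change.
camp = {
    "name": "Example Camp",        # illustrative, not a real site
    "amenity": "refugee_site",     # humanitarian-model style tag
    "wheelchair": "yes",           # accessibility tag from everyday mapping
    "water_source": "tanker",      # attribute added during the response
}

def matches(features, **required):
    """Return features whose tags contain all of the required key/value pairs."""
    return [f for f in features
            if all(f.get(k) == v for k, v in required.items())]

# A responder can query by the crisis tags alone.
found = matches([camp], amenity="refugee_site")
```

The same query machinery that finds wheelchair-accessible pubs in peacetime finds water-supplied camps in a crisis, which is exactly the repurposing the post describes.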
The US State Department is supporting microtasking analysis of imagery for IDP camps and structural damage. How do you train new volunteers to do remote sensing analysis and data input in a repeatable, and accurate way?
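One common answer to the repeatability question, which I sketch here as an assumption rather than a description of the State Department's actual workflow, is redundant assignment: give each image tile to several volunteers and accept a label only when enough independent classifications agree.

```python
from collections import Counter

def consensus(labels, min_votes=3, min_agreement=0.6):
    """Return the majority label if enough volunteers agree, else None.

    labels        -- classifications from independent volunteers
    min_votes     -- minimum number of classifications before deciding
    min_agreement -- fraction of votes the majority label must reach
    """
    if len(labels) < min_votes:
        return None  # not enough eyes on this tile yet
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= min_agreement else None

result = consensus(["damaged", "damaged", "intact", "damaged"])  # "damaged"
pending = consensus(["damaged", "intact"])  # None: still needs more votes
```

Disagreement then becomes a signal in itself: tiles that never reach consensus can be escalated to an experienced analyst, which is one way to train new volunteers against known answers.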
On the ground, US wildfires have demonstrated the potential for crowd-sourced photography, as well as geolocated Tweets, to provide updated fire progress, impact, and evacuation. What are the interfaces for visualizing official models compared with potential fire models built from uncertain data? And how do you account for the over-abundance of information from singular sources, “the Racerboi8 problem”?

Dynamic Visualizations of Dynamic Events
What are the mechanisms for defining more structured, and even dynamic, information? During Hurricane Sandy, high school students at IMSOCIO called gas stations to get the current availability of fuel and power, which was published as a KML feed and map. How do we visualize this data to reflect recency (i.e. does older data have questionable accuracy?) or trustworthiness (did the station owner give false information so that people would not overrun his shop?)?
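One hedged way to visualize recency is to decay a report's confidence with its age, for instance exponentially, and map that weight to marker opacity or size. The half-life below is an arbitrary assumption, and a real map would tune it per data source:

```python
from datetime import datetime, timedelta

# Assumed half-life: a report loses half its weight every 3 hours.
HALF_LIFE = timedelta(hours=3)

def recency_weight(reported_at: datetime, now: datetime) -> float:
    """1.0 for a brand-new report, halving every HALF_LIFE of age."""
    age = (now - reported_at).total_seconds()
    return 0.5 ** (age / HALF_LIFE.total_seconds())

now = datetime(2012, 11, 1, 12, 0)
fresh = recency_weight(datetime(2012, 11, 1, 12, 0), now)  # 1.0
stale = recency_weight(datetime(2012, 11, 1, 9, 0), now)   # 0.5
```

Fading a three-hour-old "station has gas" report to half opacity communicates uncertainty without hiding the data point entirely, which a static map cannot do.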
Can we even detect an event through implicit information, such as the increased viewing of imagery over specific areas, as in identifying the location of the meteor through map tile requests?

Mobile Citizens
Most importantly, and even going back to 2005, people are increasingly connected through mobile devices. These new personal tricorders offer a continuous connection to people, alerts, and maps that can provide important information in evacuation as well as response; but they require a different interface for viewing and annotating this data.
And sometimes paper is still an incredibly important medium. During a disaster, text connections are surprisingly resilient, but power may not be. Having tools for personalized, and even ‘zoomable’, paper maps may mean someone can find shelters or meet up with family members, such as through Safety Maps.

Room for Exploration
Crisis events compel people to go above and beyond their normal efforts to help communities in a time of need. Maps serve as a fundamental underpinning for responders and citizens alike. Cartography has the opportunity to evolve to address these important issues, and to create new, innovative methods and technology that also have broad application.