Here's the solution to the Uber and Airbnb problems -- and no one will like it

It's been a fascinating week to watch the war between Uber and the de Blasio administration play out. Not surprisingly, Uber ended up carrying the day using a combination of its dedicated user base and its sophisticated political machine. This is yet another very early round in what will be a long and hard war -- not just between Uber and NYC, or Uber and other cities, but between every high-growth startup innovating in a regulated sector and every regulator and lawmaker overseeing those sectors.

Watching the big battles that have played out so far -- in particular around Uber and Airbnb -- we've seen the same pattern several times over: a new startup delivers a creative and delightful new service which breaks the old rules, ignoring those rules until it has a critical mass of happy customers; regulators and incumbents respond by trying to shut down the new innovation; startups and their happy users rain hellfire on the regulators; questions arise about the actual impact of the new innovation; a tiny amount of data is shared to settle the dispute. Rinse and repeat, over and over.

I am not sure there's a near-term alternative to this process -- new ways of doing things will never see the light of day if step 1 is always "ask permission". The answer will nearly always be no, and new ideas won't have a chance to prove themselves.

Luckily, though, we have something of a model to follow for a better future. It's the way that these new platforms are regulating themselves. My colleague Brad has long said that web platforms are like governments, and that's becoming clearer by the day (just look at Reddit for the latest chapter). The primary innovation that modern web platforms have created is, essentially, how to regulate, adaptively, at scale. Using tons and tons of real-time data as their primary tool, they've inverted the regulatory model.
Rather than requiring onerous up-front permission to onboard, these platforms let users onboard easily, but then hold them to strict accountability through the data about their actions:
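As a toy illustration of that inverted model -- this is not any platform's actual system, and the class name, thresholds, and grace period are all invented for the sketch -- the logic looks roughly like this:

```python
# A toy sketch of the inverted model: onboarding is immediate, and
# accountability comes afterward, from the data each action generates.
# MIN_RATING and GRACE_PERIOD_TRIPS are invented, illustrative numbers.

MIN_RATING = 4.6              # illustrative deactivation threshold
GRACE_PERIOD_TRIPS = 25       # new drivers get time to build a record

class Driver:
    def __init__(self, name):
        self.name = name
        self.ratings = []
        self.active = True    # onboarded immediately -- no up-front permit

    def record_trip(self, rating):
        """Every trip produces data; enforcement is continuous, not up-front."""
        self.ratings.append(rating)
        if len(self.ratings) >= GRACE_PERIOD_TRIPS:
            avg = sum(self.ratings) / len(self.ratings)
            if avg < MIN_RATING:
                self.active = False   # booted for poor performance

d = Driver("example-driver")
for _ in range(24):
    d.record_trip(5.0)
assert d.active               # good record, still active
for _ in range(30):
    d.record_trip(3.0)
print(d.active)  # False -- the data, not a license board, revoked access
```

The point of the sketch is the ordering: access is granted first, and verification happens continuously afterward.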

Contrast this with the traditional regulatory model -- the one governments use to regulate the private sector -- which is exactly the opposite: regulations rely on up-front permission as the primary tool:

[image: Platform-Gov-regulations.004]

The reason for this makes lots of sense: when today's regulations were designed (largely at the beginning of the progressive era in the early 20th century), we didn't have access to real-time data. So the only feasible approach was to build high barriers to entry.

Today, things are different. We have data, lots of it. In the case of the relationship between web platforms (companies) and their users, we are leveraging that data to introduce a regulatory regime of data-driven accountability. Just ask any Uber driver what their chief complaint is, and you'll likely hear that they can get booted off the platform for poor performance, very quickly.

Now, the question is: how can we transform our public regulations to adopt this kind of model? Here's the part that no one will like:

1) Regulators need to accept a new model where they focus less on making it hard for people to get started. That means things like relaxing licensing requirements (for example, all the states working on Bitcoin licensing right now) and increasing the freedom to operate. This is critical for experimentation and innovation.

2) In exchange for that freedom to operate, companies will need to share data with regulators -- un-massaged, and in real time, just like their users do with them. AND, they will need to accept that that data may result in new forms of accountability. For example, we should give ourselves the opportunity to enjoy the obvious benefits of the Ubers and Airbnbs of the world, but also recognize that Uber could be making NYC traffic worse, and Airbnb could be making SF housing affordability worse.

In other words, grant companies the freedoms they grant their users, but also bring the same data-driven accountability:

[image: Platform-Gov-regulations.005]

That is going to be a tough pill to swallow, on both sides, so I'm not sure how we get there. But I believe that if we're honest with ourselves, we will recognize that the approach to regulation that web platforms have brought to their users is an innovation in its own right, and one that we should aim to apply to the public layer.

Over at TechCrunch, Kim-Mai Cutler has been exploring this idea in depth. In her article today, she rightly points out that "Those decisions are tough if no one trusts each other" -- platforms (rightly) don't trust regulators not to instinctively clamp down on new innovations, and regulators don't trust platforms to EITHER play by the existing rules OR provide in-depth data for the sake of accountability.

In the meantime, we'll get to observe more battles as the war rages on.

#airbnb#policy#regulation#uber

Regulation, the Internet way

Today at USV, we are hosting our 4th semiannual Trust, Safety and Security Summit. Brittany, who manages the USV portfolio network, runs about 60 events per year -- each one a peer-driven, peer-learning experience, like a mini-unconference on topics like engineering, people, design, etc. The USV network is really incredible and the summits are a big part of it.

I always attend the Trust, Safety and Security summits as part of my policy-focused work. Pretty much every network we are investors in has a "trust and safety" team which deals with issues ranging from content policies (spam, harassment, etc.) to physical safety (on networks with a real-world component), to dealing with law enforcement. We also include security (data security, physical security) here -- often managed by a different team, but with many issues that overlap with T&S.

What's amazing to witness when working with Trust, Safety and Security teams is that they are rapidly innovating on policy. We've long described web services as akin to governments, and it's within this area that this is most apparent. Each community is developing its own practices and norms and rapidly iterating on the design of its policies based on lots and lots and lots of real-time data.

What's notable is that across the wide variety of platforms (from messaging apps like Kik, to marketplaces like Etsy and Kickstarter, to real-world networks like Kitchensurfing and Sidecar, to security services like Cloudflare and Sift Science), the common element in terms of policy is the ability to handle the onboarding of millions of new users per day thanks to data-driven, peer-produced policy devices -- which you could largely classify as "reputation systems". Note that this approach works for "centralized" networks like the ones listed above, as well as for decentralized systems (like email and bitcoin), and that governing in decentralized systems has its own set of challenges.
This is a fundamentally different regulatory model than what we have in the real world.  On the internet, the model is "go ahead and do -- but we'll track it and your reputation will be affected if you're a bad actor", whereas with real-world government, the model is more "get our permission first, then go do".  I've described this before as "regulation 1.0" vs. "regulation 2.0":

I recently wrote a white paper for the Data-Smart City Solutions program at the Harvard Kennedy School on this topic, which I have neglected to blog about here so far. It's quite long, but the above is basically the TL;DR version. I mention it today because we continue to be faced with the challenge of applying regulation 1.0 models to a regulation 2.0 world. Here are two examples:

First, the NYC Taxi and Limousine Commission's recently proposed rules for regulating on-demand ride applications. At least two aspects of the proposed rules are really problematic:

  1. The TLC wants to require its sign-off on any new on-demand ride apps, including all updates to existing apps.

  2. The TLC will limit any driver to having only one active device in their car.

On #1: apps ship updates nearly every day. Imagine adding a layer of regulatory approval to that step. And imagine that that approval needs to come from a government agency without deep expertise in application development. It's bad enough that developers need Apple's approval to ship iOS apps -- we simply cannot allow for this kind of friction when bringing products to market.

On #2: the last thing we want to do is introduce artificial scarcity into the system. The beauty of regulation 2.0 is that we can welcome new entrants, welcome innovations, and welcome competition. We don't need to impose barriers and limits. And we certainly don't want new regulations to entrench incumbents (whether that's the existing taxi/livery system or new incumbents like Uber).

Second, the NYS Dept. of Financial Services this week released its final BitLicense, which will regulate bitcoin service providers. Coin Center has a detailed response to the BitLicense framework, which points out the following major flaws:

  • Anti-money-laundering requirements are improved but vague.

  • A requirement that new products be pre-approved by the NYDFS superintendent.

  • Custody or control of consumer funds is not defined in a way that takes full account of the technology’s capabilities.

  • Language which could prevent businesses from lawfully protecting customers against public exposure of their transaction histories.

  • The lack of a defined onramp for startups.

Without getting into all the details, I'll note two big ones: DFS preapproval for all app updates (same as with the TLC) and the "lack of a defined on-ramp for startups".

This idea of an "on-ramp" is critical. It is the key thing that all the web platforms referenced at the top of this post get right, and it is the core idea behind regulation 2.0. Because we collect so much data in real time, we can vastly open up the "on-ramps", whether those are for new customers/users (in the case of web platforms) or for new startups (in the case of government regulations).

The challenge here is that we ultimately need to decide to make a pretty profound trade: trading up-front, permission-based systems for open systems made accountable through data.

[image: Screen Shot 2015-06-04 at 8.34.06 AM]

The challenge here is exacerbated by the fact that it will be resisted on both sides: governments will not want to relinquish the ability to grant permissions, and platforms will not want to relinquish data.  So perhaps we will remain at a standoff, or perhaps we can find an opportunity to consciously make that trade -- dropping permission requirements in exchange for opening up more data.  This is the core idea behind my Regulation 2.0 white paper, and I suspect we'll see the opportunity to do this play out again and again in the coming months and years.
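One purely hypothetical sketch of what "opening up more data" could look like without handing over raw user records: aggregate activity before disclosure, and suppress cells small enough to identify individuals. The field names ("zone", "hour") and the minimum cell size here are invented for illustration.

```python
# Hypothetical sketch: share aggregated, privacy-protective counts with a
# regulator rather than raw trip records. All field names and the minimum
# cell size are assumptions, not any platform's real reporting format.

from collections import Counter

def aggregate_for_regulator(trips, min_cell_size=10):
    """Count trips per (zone, hour), dropping cells too small to be anonymous."""
    counts = Counter((t["zone"], t["hour"]) for t in trips)
    return {cell: n for cell, n in counts.items() if n >= min_cell_size}

raw_trips = (
    [{"zone": "midtown", "hour": 8} for _ in range(42)]
    + [{"zone": "soho", "hour": 23} for _ in range(3)]
)

report = aggregate_for_regulator(raw_trips)
print(report)  # {('midtown', 8): 42} -- the 3-trip cell is suppressed
```

The design choice is the same one behind the "trade" above: the regulator gets enough real-time signal to hold the platform accountable, without the platform disclosing individual users' behavior.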

#nyc#nys#policy#regulation-2-0#talks-decks-graphics

Dick Pics and Cable Company Fuckery

John Oliver has become the most important voice in tech policy (and maybe policy in general). His gift, his talent, his skill: turning wonky policy language that makes people glaze over into messages that people connect to and care about.

Last fall, he took what may be the most boring, confusing term ever, Net Neutrality, and made it relatable as Cable Company Fuckery. 8mm people watched that video, and it was a big factor behind the over 4mm comments left at the FCC on an issue that even most tech people had a hard time explaining to each other.

Now, he has tackled another mind-bending, but really very important topic: surveillance. It's amazing really. Huge, complicated, important issue. Real-life spy stories, with real-life heroes and villains. And no one gives a shit at all. But when you say it the right way -- in this case: should the government be able to see your dick pic? -- people light up. This is 30 minutes of truly instructive brilliance:

The best part: he hands Snowden a folder labeled top secret, including an 8x10 photo of his own penis, and asks Snowden to re-explain every NSA spy program in terms of "the dick pic test".

On the one hand, you could argue that it's sad that policy issues need to get boiled down to "dick pics" and "fuckery" for people to get them. On the other hand, it's even sadder that the people investing time, energy, and effort in working on these issues (myself included) don't grasp that and use it to make sure ideas connect. Thankfully we have John Oliver to help us with that.

This piece is brilliant -- in particular the way he opens Snowden's eyes to the extent to which people don't get this issue, misunderstand who he is and what he did, and need it to be presented to them in a different, simpler way. The major point here is that no matter your feelings on what Snowden did, it's all for naught if it doesn't trigger an actual conversation.
And while it's easy for folks in the tech / policy community to feel like that conversation is happening, the truth is that on a broad popular level it's not. So once again John Oliver has shown us how to take a super important, super complicated, and basically ignored issue and put it on the table in a way people can chew on. Bravo.

From here on out, I'm going to start looking at every policy issue through the lens of WWJD -- what would John Oliver do -- and lift it out of the vegetable garden of policy talk and into the headspace of people on the street.

#ed-snowden#john-oliver#policy#surveillance

Increasing trust, safety and security using a Regulation 2.0 approach

This is the latest post in a series on Regulation 2.0 that I'm developing into a white paper for the Program on Municipal Innovation at the Harvard Kennedy School of Government.

Yesterday, the Boston Globe reported that an Uber driver kidnapped and raped a passenger. First, my heart goes out to the passenger, her friends and her family. And second, I take this as yet another test of our fledgling ability to create scalable systems for trust, safety and security built on the web. This example shows us that these systems are far from perfect.

This is precisely the kind of worst-case scenario that anyone thinking about these trust, safety and security issues wants to prevent. As I've written about previously, trust, safety and security are pillars of successful and healthy web platforms:

  • Safety means putting measures into place that prevent user abuse, hold members accountable, and provide assistance when a crisis occurs.

  • Trust, a bit more nuanced in how it's created, comes from the explicit and implicit contracts between the company, customers and employees.

  • Security protects the company, customers, and employees from breach, digital or physical, all while abiding by local, national and international law.

An event like this compromises all three. The question, then, is how to improve these systems, and then whether, over time, the level of trust, safety and security we can ultimately achieve is better than what we could do before.

The idea I've been presenting here is that social web platforms, dating back to eBay in the late 90s, have been in a continual process of inventing "regulatory" systems that make it possible and safe(r) to transact with strangers. The working hypothesis is that these systems are not only scalable in a way that traditional regulatory systems aren't -- building on the "trust, then verify" model -- but can actually be more effective than traditional "permission-based" licensing and permitting regimes. In other words, they trade access to the market (relatively lenient) for hyper-accountability (extremely strict). Compare that to traditional systems that don't have access to vast and granular data, and so can only rely on strict up-front vetting followed by limited, infrequent oversight. You might describe it like this:

This model has worked well in situations with relatively low risk of personal harm. If I buy something on eBay and the seller never ships, I'll live. When we start connecting real people in the real world, things get riskier and more dangerous. There are many important questions that we as entrepreneurs, investors and regulators should consider:

  • How much risk is acceptable in an "open access / high accountability" model, and how could regulators mitigate known risks by extending and building on regulation 2.0 techniques?

  • How can we increase the “lead time” for regulators to consider these questions, and come up with novel solutions, while at the same time incentivizing startups to “raise their hand” and participate in the process, without fear of getting preemptively shut down before their ideas are validated?

  • How could regulators adopt a 2.0 approach in the face of an increasing number of new models in additional sectors (food, health, education, finance, etc)?

Here are a few ideas to address these questions:

With all of this, the key is in the information. Looking at the diagram above, "high accountability" is another way of saying "built on information". The key tradeoff being made by web platforms and their users is access to the market in exchange for high accountability through data. One could imagine regulators taking a similar approach to startups in highly regulated sectors.

Building on this, we should think about safe harbors and incentives to register. The idea of high-information regulation only works if there is an exchange of information! So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data? Such a system, done right, could give regulators the needed lead time to understand a new approach, while also developing a relationship with entrepreneurs in the sector. Entrepreneurs are largely skeptical of this approach, given how much the "build an audience, then ask for forgiveness" model has been played out. But this model is risky and expensive, and now, having seen it play out a few times, perhaps we can find a more moderate approach.

Consider where to implement targeted transparency. One of the ways web platforms are able to convince users to participate in the "open access for accountability through data" trade is that many of the outputs of this data exchange are visible. This is part of the trade. I can see my eBay seller score; Uber drivers can see their driver score; etc. A major concern that many companies and individuals have is that increased data-sharing with the government will be a one-way street; targeted transparency efforts can make that clearer.

Think about how to involve third-party stakeholders in the accountability process. For example, impact on neighbors has been one of the complaints about the growth of the home-sharing sector.
Rather than make a blanket rule on the subject, how might it be possible to include these stakeholders in the data-driven accountability process? One could imagine a neighbor hotline, or a feedback system, that could incentivize good behavior and allow for meaningful third-party input.

Consider endorsing a right to an API key for participants in these ecosystems. Such a right would allow / require actors to make their reputation portable, which would increase accountability broadly. It also has implications for labor rights and organizing, as Albert describes in the above linked post. Alternatively, or in addition, we could think about real-time disclosure requirements for data with trust and safety implications, such as driver ratings. Such disclosures could be made as part of the trade for the freedom to operate.

Related: consider ways to use encryption and aggregate data for analysis, to avoid some of the privacy issues inherent in this approach. While users trust web platforms with very specific data about their activities, how that data is shared with the government is not typically part of that agreement, and this needs to be handled carefully. For example, even though Apple knows how fast I'm driving at any time, I would be surprised and upset if it reported me to the authorities for speeding. Of course, this is completely different for emergent safety situations, such as the Uber example above, where platforms cooperate regularly and swiftly with law enforcement.

While it is not clear that any of these techniques would have prevented this incident, or that it might have been possible to prevent it at all, my idealistic viewpoint is that by working to collaborate on policy responses to the risks and opportunities inherent in all of these new systems, we can build stronger, safer and more scalable approaches.

// thanks to Brittany Laughlin and Aaron Wright for their input on this post
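To make the "right to an API key" idea concrete, here is an illustrative sketch of a platform exporting a signed reputation record that a participant could carry to other services. The signing scheme, secret, and field names are all assumptions for the sketch, not any real platform's API.

```python
# Illustrative sketch of portable reputation: the platform exports a record
# plus an HMAC signature, so a third party can verify it wasn't tampered
# with. PLATFORM_SECRET and all field names are hypothetical.

import hashlib
import hmac
import json

PLATFORM_SECRET = b"example-signing-key"   # hypothetical shared secret

def export_reputation(user_id, rating, completed_jobs):
    """Return a reputation record plus an HMAC signature over it."""
    record = {"user": user_id, "rating": rating, "jobs": completed_jobs}
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(PLATFORM_SECRET, payload, hashlib.sha256).hexdigest()
    return record, signature

def verify_reputation(record, signature):
    """A party holding the key checks that the record is authentic."""
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

record, sig = export_reputation("driver-123", 4.9, 1240)
assert verify_reputation(record, sig)   # portable and verifiable
record["rating"] = 5.0                  # an inflated rating...
print(verify_reputation(record, sig))   # False -- tampering is detectable
```

In practice a scheme like this would more likely use public-key signatures, so that third parties could verify a record without holding the platform's secret; the point of the sketch is simply that reputation can travel with the person while remaining tamper-evident.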

#policy#regulation-2-0

Regulation and the peer economy: a 2.0 framework

As part of my series on Regulation 2.0, which I'm putting together for the Program on Municipal Innovation at the Harvard Kennedy School, today I am going to employ a bit of a cop-out tactic: rather than publish my next section (which I haven't finished yet, largely because my whole family has the flu right now), I will publish a report written earlier this year by my friend Max Pomeranc.

Max is a former congressional chief of staff who did his masters at the Kennedy School last year. For his "policy analysis exercise" (essentially a thesis paper), Max looked at regulation and the peer economy, exploring the idea of a "2.0" approach. I was Max's advisor for the paper, and he has since gone on to a policy job at Airbnb.

Max did a great job of looking at two recent examples of the peer economy meeting regulation -- the California ridesharing rules, and the JOBS Act for equity crowdfunding -- and exploring some concepts which could be part of a "2.0" approach to regulation. His full report is here. It's a relatively quick read and a good starting place for thinking about these ideas.

I am off to meet Max for breakfast as we speak! More tomorrow.

#max-pomeranc#policy#regulation-2-0

Web platforms as regulatory systems

This is part 3 in a series of posts I'm developing into a white paper on "Regulation 2.0" for the Program on Municipal Innovation at the Harvard Kennedy School of Government. For many tech industry readers of this blog, these ideas may seem obvious, but they are not intended for you! They are meant to help bring a fresh perspective to public policy makers who may not be familiar with the trust and safety systems underpinning today's social/collaborative web platforms.

Twice a year, a group of regulators and policymakers convenes to discuss their approaches to ensuring trust, safety and security in their large and diverse communities. Topics on the agenda range from financial fraud, to bullying, to free speech, to transportation, to child predation, to healthcare, to the relationship between the community and law enforcement. Each is experimenting with new ways to address these community issues. As their communities grow (very quickly in some cases) and become more diverse, it's increasingly important that whatever approaches they implement can both scale to accommodate large volumes and rapid growth, and adapt to new situations. There is a lot of discussion about how data and analytics are used to help guide decision-making and policy development. And of course, they are all working within the constraints of relatively tiny staffs and relatively tiny budgets.

As you may have guessed, this group of regulators and policymakers doesn't represent cities, states or countries. Rather, they represent web and mobile platforms: social networks, e-commerce sites, crowdfunding platforms, education platforms, audio & video platforms, transportation networks, lending, banking and money-transfer platforms, security services, and more. Many of them are managing communities of tens or hundreds of millions of users, and are seeing growth rates upwards of 20% per month.
The event is Union Square Ventures' semiannual "Trust, Safety and Security" summit, where each company's trust & safety, security and legal officers and teams convene to learn from one another.

In 2010, my colleague Brad Burnham wrote a post suggesting that web platforms are in many ways more like governments than traditional businesses. This is perhaps a controversial idea, but one thing is unequivocally true: like governments, each platform is in the business of developing policies which enable social and economic activity that is vibrant and safe.

The past 15 or so years have been a period of profound and rapid "regulatory" innovation on the internet. In 2000, most people were afraid to use a credit card on the internet, let alone send money to a complete stranger in exchange for some used item. Today, we're comfortable getting into cars driven by strangers, inviting strangers to spend an evening in our apartments (and vice versa), giving direct financial support to individuals and projects of all kinds, sharing live video of ourselves, taking lessons from unaccredited strangers, etc. In other words, the new economy being built in the internet model is being regulated with a high degree of success.

Of course, that does not mean that everything is perfect and there are no risks. On the contrary, every new situation introduces new risks. And every platform addresses these risks differently, and with varying degrees of success. Indeed, it is precisely the threat of bad outcomes that motivates web platforms to invest so heavily in their "trust and safety" (i.e., regulatory) systems & teams. If they are not ultimately able to make their platforms safe and comfortable places to socialize & transact, the party is over. As with the startup world in general, the internet approach to regulation is about trying new things, seeing what works and what doesn't, and making rapid (and sometimes profound) adjustments.
And in fact, that approach -- watch what's happening, then correct for bad behavior -- is the central idea. So: what characterizes these "regulatory" systems? There are a few common characteristics that run through nearly all of them:

Built on information: The foundational characteristic of these "internet regulatory systems" is that they wouldn't be possible without large volumes of real-time data describing nearly all activity on the platform (when we think about applying this model to the public sector this raises additional concerns, which we'll discuss later). This characteristic is what enables everything that follows, and is the key idea distinguishing these new regulatory systems from the "industrial model" regulatory systems of the 20th century.

Trust by default (but verify): Once we have real-time and relatively complete information about platform/community activity, we can radically shift our operating model. We can then, and only then, move from an "up front permission" model to a "trust but verify" model. Two critical capabilities follow from this shift: a) the ability to operate at a very large scale, at low cost, and b) the ability to explicitly promote "innovation" by not prescribing outcomes from the get-go.

Busier is better: It's fascinating to think about systems that work better the busier they are. Subways, for instance, can run higher-frequency service during rush hour due to steady demand, thereby speeding up travel times when things are busiest. Contrast that with streets, which perform the worst when they are needed most (rush hour). Internet regulatory systems -- and eventually all regulatory systems that are built on software and data -- work better the more people use them: they are not only able to scale to handle large volumes, but they learn more the more use they see.
Responsive policy development: Now, given that we have high-quality, relatively comprehensive information, we've adopted a "trust but verify" model that allows many actors to begin participating, and we've invited as much use as we can, we're able to approach policy development from a very different perspective. Rather than looking at a situation and debating hypothetical "what-ifs", we can see very concretely where good and bad activity is happening, and can begin experimenting with policies and procedures to encourage the good activity and limit the bad.

If you are thinking "wow, that's a pretty different, and powerful, but very scary approach", you are right! This model does a lot of things that our 20th century common sense should be wary of. It allows for widespread activity before risk has been fully assessed, and it provides massive amounts of real-time data, and massive amounts of power, to the "regulators" who decide the policies based on this information.

So, would it be possible to apply these ideas to public sector regulation? Can we do it in such a way that actually allows new innovations to flourish, pushing back against our reflexive urge to de-risk all new activities before allowing them? Can & should the government be trusted with all of that personal data? These are all important questions, and ones that we'll address in forthcoming sections. Stay tuned.
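The "trust by default (but verify)" characteristic above can be sketched in a few lines. This is a toy model, not any platform's real policy: the tiers, limits, and dispute threshold are invented for illustration.

```python
# A minimal sketch of "trust by default (but verify)": participants get
# immediate but limited access, which expands as good behavior accumulates
# and is revoked when the data shows abuse. All tiers and limits are
# invented, illustrative numbers.

def listing_limit(completed_sales, disputes):
    """Decide how many active listings a seller may have, from their record."""
    if disputes > 2:
        return 0          # verified bad actor: access revoked
    if completed_sales < 10:
        return 5          # trusted by default, but capped while unproven
    if completed_sales < 100:
        return 50         # a growing record earns more room
    return 1000           # long good record: near-open access

assert listing_limit(0, 0) == 5       # day-one access, no permit required
assert listing_limit(50, 0) == 50     # limits loosen as good data comes in
print(listing_limit(500, 3))  # 0 -- the record, not up-front vetting, decides
```

Note the contrast with an "up front permission" model: nobody is vetted before their first listing, but every subsequent action feeds the data that determines their access.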

#policy#regulation-2-0#trust#trust-and-safety

Technological revolutions and the search for trust

For the past several years, I have been an advisor to the Data-Smart City Solutions initiative at the Harvard Kennedy School of Government. This is a group tasked with helping cities consider how to govern in new ways using the volumes of new data that are now available. An adjacent group at HKS is the Program on Municipal Innovation (PMI), which brings together a large group of city managers (deputy mayors and other operational leaders) twice a year to talk shop. I've had the honor of attending this meeting a few times in the past, and I must say it's inspiring and encouraging to see urban leaders from across the US come together to learn from one another.

One of the PMI's latest projects is an initiative on regulatory reform -- studying how, exactly, cities can go about assessing existing rules and regulations, and revising them as necessary. As part of this initiative, I've been writing up a short white paper on "Regulation 2.0" -- the idea that government can adopt some of the "regulatory" techniques pioneered by web platforms to achieve trust and safety at scale. Over the course of this week, I'll publish my latest drafts of the sections of the paper. Here's the outline I'm working on:

  1. Regulation 1.0 vs. Regulation 2.0: an example

  2. Context: technological revolutions and the search for trust

  3. Today’s conflict: some concrete examples

  4. Web platforms as regulatory systems

  5. Regulation 2.0: applying the lessons of web platform regulation to the real world

Section 1 will be an adaptation of this post from last year. My latest draft of section 2 is below. I'll publish the remaining sections over the course of this week. As always, any and all feedback is greatly appreciated!

====

Technological revolutions and the search for trust

The search for trust amidst rapid change, as described in the Seattle ridesharing example, is not a new thing. It is, in fact, a natural and predictable response to times when new technologies fundamentally change the rules of the game.

We are in the midst of a major technological revolution, the likes of which we experience only once or twice per century. Economist Carlota Perez describes these waves of massive technological change as "great surges", each of which involves "profound changes in people, organizations and skills in a sort of habit-breaking hurricane."[1] This sounds very big and scary, of course, and it is. Perez's study of technological revolutions over the past 250 years -- five distinct great surges lasting roughly fifty years each -- shows that as we develop and deploy new technologies, we repeatedly break and rebuild the foundations of society: economic structures, social norms, laws and regulations. It's a wild, turbulent and unpredictable process.

Despite the inherent unpredictability of new technologies, Perez found that each of these great surges does, in fact, follow a common pattern. First, a new technology opens up a massive new opportunity for innovation and investment. Second, the wild rush to explore and implement this technology produces vast new wealth, while at the same time causing massive dislocation and angst, often resulting in a bubble bursting and a recession. Finally, broader cultural adoption paired with regulatory reforms sets the stage for a smoother and more broadly prosperous period of growth, resulting in the full deployment of the mature technology and all of its associated social and institutional changes.
And of course, by the time each fifty-year surge concluded, the seeds of the next one had been planted.

image: The Economist

So essentially: wild growth, societal disruption, then readjustment and broad adoption.  Perez describes the “readjustment and broad adoption” phase (the “deployment period” in the diagram above) as the percolating of the “common sense” throughout other aspects of society:

“the new paradigm eventually becomes the new generalized ‘common sense’, which gradually finds itself embedded in social practice, legislation and other components of the institutional framework, facilitating compatible innovations and hindering incompatible ones.”[2]

In other words, once the established powers of the previous paradigm are done fighting off the new paradigm (typically after some sort of profound blow-up), we come around to adopting the techniques of the new paradigm to achieve the sense of trust and safety that we had come to know in the previous one.  Same goals, new methods.

As it happens, our current “1.0” regulatory model was itself the result of a previous technological revolution.  In The Search for Order: 1877-1920[3], Robert H. Wiebe describes the state of affairs that led to the progressive era reforms of the early 20th century:

Established wealth and power fought one battle after another against the great new fortunes and political kingdoms carved out of urban-industrial America, and the more they struggled, the more they scrambled the criteria of prestige. The concept of a middle class crumbled at the touch. Small business appeared and disappeared at a frightening rate. The so-called professions meant little as long as anyone with a bag of pills and a bottle of syrup could pass for a doctor, a few books and a corrupt judge made a man a lawyer, and an unemployed literate qualified as a teacher.

This sounds a lot like today, right?  A new techno-economic paradigm (in this case, urbanization and inter-city transportation) broke the previous model of trust (isolated, closely-knit rural communities), forcing a re-thinking of how to find that trust.  During the “bureaucratic revolution” of the early 20th century progressive reforms, the answer to this problem was the establishment of institutions -- on the private side, firms with trustworthy brands, and on the public side, regulatory bodies -- that took on the burden of ensuring public safety and the trust & security necessary to underpin the economy and society.

Coming back to today: we are currently in the middle of one of these fifty-year surges -- the paradigm of networked information -- and roughly in the middle of the graph above.  We’ve seen wild growth, intense investment, and profound conflicts between the new paradigm and the old.

What this paper is about, then, is how we might adopt the tools & techniques of the networked information paradigm to achieve the societal goals previously achieved through the 20th century’s “industrial” regulations and public policies.  A “2.0” approach, if you will, that adopts the “common sense” of the internet era to build a foundation of trust and safety.

Coming up: a look at some concrete examples of the tensions between the networked information era and the industrial era; a view into the world of web platforms’ “trust and safety” teams and the model of regulation they’re pioneering; and finally, some specific recommendations for how we might envision a new paradigm for regulation that embraces the networked information era.

====

Footnotes:

  1. Perez, p.4

  2. Perez, p. 16

  3. Wiebe, p. 13.  Hat tip to Rit Aggarwala for this reference, and the idea of the "first bureaucratic revolution"

#policy#regulation-2-0#trust

Crowdsourcing patent examinations

Yesterday I spent part of the afternoon at a US Patent & Trademark Office roundtable discussion on using crowdsourcing to improve the patent examination process.  Thanks to Chris Wong for looping me in and helping to organize the event.  If you're interested, you can watch the whole video here.

I was there not as an expert in patents, but as someone who represents lots of small startup internet companies facing patent issues, and as someone who spends a lot of time on the problem of how to solve challenges through collaborative processes (basically everything USV invests in).

Here are my slides:

And I'll just highlight two important points.  First: why do we care about this?  Because (generally speaking) small internet companies typically see more harm than benefit from the patent system:

And second, there are many ways to contemplate "crowdsourcing" with regard to patent examinations. In the most straightforward sense, the PTO could construct a way for outsiders to submit prior art on pending patent applications -- this is the model pioneered by Peer to Patent, and built upon by Stack Exchange's Ask Patents community.

The challenge with this approach is that while structured crowdsourcing around complex problems is proven to work, it's really hard to get right.  A big risk facing the PTO is investing a lot in a web interface for this, in a "big bang" sort of way (a la healthcare.gov), not getting it right, and then writing the whole thing off as a failure.

To that end, I posed the idea that getting "crowdsourcing" right is really a cultural issue, not a technical issue.  In other words, making it work is not just about building the right website and hoping people will come.  Getting it right will mean changing the way you connect and engage with "the crowd".  As Micah Siegel from Ask Patents put it, "you can't do crowdsourcing without a crowd".

We also talked about the importance of APIs and open data in all of this, so that people can build applications (simple ones, like notifications or tweets, or complex ones involving workflow) around the exam process.

Tying those three ideas together (changing culture, going where "the crowd" already is, and taking an API-first approach), it seems like there is a super clear path to getting started:

  1. Set up a simple, public "uspto-developers" google group and invite interested developers to join the discussion there.

  2. Stand up a basic API for patent search that sites like Ask Patents and others could use (they specifically asked for this, and already have an active community).

That would be a really simple way to start, would be guaranteed to bear fruit in the near term, and would also help guide subsequent steps.
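To make the API idea concrete: here is a minimal sketch of the kind of lightweight app an open patent-search API would enable.  The endpoint URL and field names are hypothetical (no such USPTO API exists as described); the point is that once the data is open, a "watch this topic" notifier for a community like Ask Patents is only a few dozen lines.

```python
# Sketch of a topic-watcher built on a *hypothetical* patent-search API.
# The endpoint, parameters, and record fields are illustrative assumptions,
# not a real USPTO interface.
from urllib.parse import urlencode

SEARCH_ENDPOINT = "https://api.uspto.example/v1/applications"  # hypothetical

def build_search_url(query, status="pending", per_page=20):
    """Construct a search URL for applications matching `query`."""
    params = {"q": query, "status": status, "per_page": per_page}
    return f"{SEARCH_ENDPOINT}?{urlencode(params)}"

def new_applications(current, previously_seen_ids):
    """Return only the applications we haven't seen before -- the core of
    a notifier that tweets or emails when a new filing matches a topic."""
    return [app for app in current if app["id"] not in previously_seen_ids]

# A community could poll the search URL on a schedule, then alert members
# only about newly published applications on their watched topic.
url = build_search_url("social network photo sharing")
seen = {"US2014-001", "US2014-002"}
fresh = new_applications(
    [{"id": "US2014-002"}, {"id": "US2014-003"}],
    seen,
)
# fresh contains only the US2014-003 record
```

Everything here is stdlib; the hard part, as discussed above, isn't the code -- it's building the crowd that would use it.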

It felt like a productive discussion -- I appreciate how hard it is to approach an old problem in a new way, and get the sense that the PTO is taking a real stab at it.  

#crowdsourcing#patents#policy#uspto

Support services for the Indie Economy

Over the course of the past year, I've been interviewed a bunch of times about the "peer economy" or the "sharing economy" (Fastco, Wired, NY Times, PBS Newshour), with most of the focus on the public policy considerations of all this, specifically public safety regulations and the impact on labor. A question that comes up every time is: "aren't all of these new independent workers missing out on the stability provided by full-time employment?" (e.g., healthcare, steady work, etc.)

My answer has been: yes, for the moment. BUT, there is an emerging wave of networked services which will provide this stability to independent workers, albeit in a different form than we're used to seeing. My colleague Albert describes this as the "unbundling of a job" -- the idea that many of the things that have traditionally been part of a job (not just steady money and healthcare, but also a sense of purpose, camaraderie, etc.) will in the future be offered by a combination of other organizations, services and communities.  Albert takes the idea a lot farther than I will here; I just want to focus on some of the more immediately practical developments.

Thus far, this idea hasn't gotten a lot of press attention, as the number of visible services providing this kind of support has been small.  But it is growing, and I expect we'll see at least a small handful of these kinds of services gain traction in the next year.

The oldest and most venerable organization doing this is the Freelancers Union.  New Yorkers will recognize their subway ads, which have run for decades advertising their programs and member benefits.  Freelancers Union's roots are in the pre-networked era, focusing largely on independent creative types in NYC, but their scope has grown dramatically over time, expanding nationwide and adding services like insurance and medical plans. 
What we expect to see a lot more of are services tailor-made to support independent workers who reach customers and deliver their work through web and mobile platforms.  For example, Peers, which is essentially a Freelancers Union for the peer economy.

So, what kinds of services are we talking about, exactly?  Here are a few of the kinds we've been noticing and think we'll see more of:

  • Insurance: One of the biggest challenges in this space has been how to insure it.  We're seeing established firms consider how to address the space, as well as brand new insurers that are tailor-made for it.

  • Job discovery & optimization: Many networked, independent workers make real-time decisions about what kind of work to do (e.g., driving vs. assembling furniture), as well as which platforms to use (uber vs lyft).  This is currently a manual, non-optimized process.  Increasing discoverability and lowering switching costs will also be an important competitive vector to ensure workers' interests are being met by platforms. (e.g., sherpashare)

  • "Back office" -- taxes, accounting, analytics: Dealing with paperwork is a huge headache for busy independent workers, and we're seeing a bunch of SaaS-type offerings to help people manage it all (e.g., 1099.is, Zen99, Benny)

  • Healthcare: Gotta have it.  This is a topic in its own right, and not expressly specific to the indie economy, but we are seeing massive experimentation and innovation in how independent actors can buy healthcare (e.g., teladoc, medigo to name two of many)

I suspect that by the end of 2015 we will not only have a much longer list of example issues and services, but we'll see that some of these have gotten traction and started to make a difference for independent workers.

So, if you're a reporter covering this beat, I think this is an interesting angle to pursue.  If you're a lawmaker or policymaker, I'd think about this as an important and growing part of the ecosystem.  And if you're an entrepreneur working in this space, we'd love to meet you :-)

#labor#miscellaneous#peer-economy#policy

I agree with Ted Cruz: let's supercharge the Internet marketplace

There has been a lot of debate about how to protect Internet Freedom. Today, Senator Ted Cruz has an op-ed in the Washington Post on the subject, which starts out with an eloquent and spot-on assessment of what we are trying to protect:

Never before has it been so easy to take an idea and turn it into a business. With a simple Internet connection, some ingenuity and a lot of hard work, anyone today can create a new service or app or start selling products nationwide.

In the past, such a person would have to know the right people and be able to raise substantial start-up capital to get a brick-and-mortar store running. Not anymore. The Internet is the great equalizer when it comes to jobs and opportunity. We should make a commitment, right now, to keep it that way.

This is absolutely what this is about. The ability for any person -- a teenager in Des Moines, a grandmother in Brazil, or a shop owner in Norway -- to get online and start writing, selling, streaming, performing, and transacting with pretty much anyone in the world (outside of China). This is the magic of the internet.  Right there.

By what was essentially a happy accident, we have created the single most open and vibrant marketplace in the history of the world.  The most democratizing, power-generating, market-making thing ever.  And the core reason behind this: on the internet, you don't have to ask anyone's permission to get started.

And that "anyone" is not just the government -- though we're used to asking the government for permission for lots of things, like driver's licenses, business licenses, etc.  More importantly, "anyone" means the carriers whose lines you need to cross to reach an audience on the internet.  A blogger doesn't have to ask Comcast's or Verizon's permission to reach their subscribers.  Neither does a small merchant, or an indie musician or filmmaker.

Contrast that with how cable TV works: in order to reach an audience, you need to cut a deal with a channel, which in turn needs to cut a deal with a carrier, before you can reach anyone.  It is completely out of the realm of possibility for me to create my own TV station in the cable model.  In the internet model, I can do it in five minutes without asking anyone's permission.  What we don't want is an internet that works like cable TV.

So I agree with Ted Cruz -- his description of the internet is exactly the one I believe in and want to fight for.  But where I think he and many others miss the point is that Internet Freedom is not just freedom from government intervention; it's freedom from powerful gatekeepers who would prefer to make the internet look like cable TV, controlling and restricting the mega-marketplace we've been so lucky to take part in.  Let's not let that happen. 
p.s., I would encourage any conservatives pondering this issue to read James J. Heaney's powerful and in-depth case for "Why Free Marketeers Want to Regulate the Internet"

#internet-freedom#miscellaneous#policy