How might society impose a duty of care on providers of social networks?

… continuing the thinking from Notes on … Social networks - analogy with alcohol

There is already a great deal of social network analysis in the public domain. It should be possible to pick out some key indicators of when Social Networks Go Bad (tm).
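To make that concrete, here is a minimal sketch of what "key indicators" might look like if computed over an interaction graph plus moderation data. The indicator names, the inputs and any thresholds are illustrative assumptions, not established measures of a network going bad.

```python
# A minimal sketch, assuming we already have an interaction graph and some
# per-post moderation counts. The indicators and their interpretation are
# illustrative guesses, not established measures.
import networkx as nx

def health_indicators(graph: nx.Graph, flagged_posts: int, total_posts: int) -> dict:
    """Return a few crude, hypothetical 'network health' indicators."""
    largest_cc = max(nx.connected_components(graph), key=len)
    degrees = [d for _, d in graph.degree()]
    return {
        "members": graph.number_of_nodes(),
        # How much of the network hangs together as one giant blob.
        "largest_component_share": len(largest_cc) / graph.number_of_nodes(),
        # Crude attention concentration: share of all connections touching the top node.
        "max_degree_share": max(degrees) / (2 * graph.number_of_edges()),
        # Proportion of posts that moderators flagged in the sampling window.
        "flagged_post_rate": flagged_posts / total_posts,
    }

# A regulator, or the provider's own audit, might track these over time and
# intervene when, say, flagged_post_rate trends sharply upwards.
```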

Perhaps [many more details and thought needed] we could approach the social network problem as follows:

  • social networks (of more than X people) to be licensed [yes, but in which country?].
  • the provider to demonstrate the resources and commitment to deliver basic network services, such as moderation, complaints procedures and counselling, that can scale with each social network they operate.
  • social networks have a maximum size, according to the type and connectivity of the network. Providers can spin up a new network if there is demand, but it remains disconnected from the previous instance (a rough sketch of how this might work follows this list).
  • social network providers (and networks) are audited.
  • people, or even regions, exhibiting certain negative behaviours can be identified and calmed down, networks retuned, whatever, but not allowed to run riot.
  • negative societal consequences are tied back to the source social network and a hard conversation is had as to whether the provider is being lax in their duties. There is no ‘we are just the platform, it is up to the users what they do’.
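As promised above, a minimal sketch of the size-cap idea. The cap values, the Network class and the spin-up rule are all assumptions made for illustration; real caps would presumably depend on the type and connectivity of the network, as licensed.

```python
# Hypothetical provider that spins up a fresh, disconnected instance when a
# network reaches its licensed maximum size. All names and numbers are invented.
from dataclasses import dataclass, field

MAX_MEMBERS_BY_TYPE = {"discussion": 5_000, "messaging": 50_000}  # hypothetical caps

@dataclass
class Network:
    kind: str
    instance: int = 1
    members: set = field(default_factory=set)

class CappedProvider:
    """Provider that adds members to the current instance until it is full."""

    def __init__(self, kind: str):
        self.kind = kind
        self.instances = [Network(kind)]

    def join(self, user_id: str) -> Network:
        current = self.instances[-1]
        if len(current.members) >= MAX_MEMBERS_BY_TYPE[self.kind]:
            # The licensed cap is reached: start a new instance that shares no
            # connections or content with the previous one.
            current = Network(self.kind, instance=len(self.instances) + 1)
            self.instances.append(current)
        current.members.add(user_id)
        return current
```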

Riffing on consequences

  • It may be that only the larger companies can afford the overheads of providing adequately maintained social networks.
  • It may be that if social networks are capped in size, the better ones have waiting lists. Perhaps they compete on community features?
  • A waiting list? Perhaps that makes a social network worth paying to join?
  • Perhaps there is a hierarchy of earned promotion up through a sequence of linked social networks, based on good behaviour?
  • Perhaps there is a transferable behaviour score, akin to a credit score, that can help or hinder your application to join a new network (sketched after this list)?
  • This causes problems for new internet startups relying on network effects.
  • How would a limited network size actually work? It makes no sense for many use cases.
  • If a provider cannot assure regulators that they can scale up a safely moderated community, then it doesn’t happen.
  • If your sub-group is just no fun, can you switch groups?
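And the promised sketch of a transferable behaviour score. The event weights, the decay rate and the joining threshold are invented for illustration; nothing here reflects how such a score would actually be calibrated or governed.

```python
# Hypothetical behaviour score: moderation events add or subtract points,
# and old behaviour counts for less, a bit like an ageing credit record.
from datetime import datetime

EVENT_WEIGHTS = {"helpful_flag": +2.0, "warning": -5.0, "ban": -25.0}  # invented
HALF_LIFE_DAYS = 180  # invented decay rate

def behaviour_score(events: list[tuple[str, datetime]], now: datetime) -> float:
    """Sum event weights, decayed by age, on top of a neutral baseline of 100."""
    score = 100.0
    for kind, when in events:
        age_days = (now - when).days
        decay = 0.5 ** (age_days / HALF_LIFE_DAYS)
        score += EVENT_WEIGHTS.get(kind, 0.0) * decay
    return score

def may_join(events: list[tuple[str, datetime]], now: datetime, threshold: float = 80.0) -> bool:
    # A new network could use the score to fast-track or defer an application.
    return behaviour_score(events, now) >= threshold
```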

But it wouldn’t work / doesn’t make sense

  • For arbitrary messaging networks, it would be like saying you can’t call that person because they are on a different phone system.
  • For online commenting on publications, could you have comments from Group A folk and a separate set of comments from Group B folk, but visible to all?
  • Would there simply be different sets of tweets in each Twitter sub-group? Would there be an overview across all groups? Who would have access to it?