Loomio
Wed 6 Aug 2014 3:25AM

"Managed" trust

ST Simon Tegg Public Seen by 158

I want to get feedback on the "managed trust" concept we've been talking about.

I've been assuming that:
* Post-Snowden, people are more concerned about data privacy.
* People are also concerned about large providers with access to all the personal data they host.
* A partial but workable solution to this problem is for "managed trust" groups to host their own data.
* "Managed trust" groups are groups with collective identities where people have close working relationships (like Enspiral).
* If you're hosting your own data, you might as well share that data between different apps (starting with login, profile, and group data) and also avoid tiresome admin work like manually keeping data in sync across several apps.

Groups like this would be our main target market and form the nodes in the Open-App network.

Offline, and in her comments in the comms doc, Alanna made a really good point that "maximised trust" groups have lots of efficiency gains. Enspiral managed to avoid a lot of bureaucracy by...trusting each other.

This sets the Open-App apart from other decentralised web projects that assume "zero trust" and need lots of strong cryptography, anonymity, and other technical hurdles to make them work.

What do people think about this?

ST

Simon Tegg Wed 6 Aug 2014 7:36PM

@carolinesmalley:
as @ahdinosaur said trust values are dream features and I personally think binaries (trust, no-trust) would be an easier place to start.

The idea of the "managed trust" concept is to put people and groups at the centre, so people would directly set their personal trust of other entities. I wouldn't "trust" an algorithm to decide how much I trust another person :). You can do interesting algorithm things once you have a layer of trust relationships in place though.
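The binary (trust/no-trust) starting point Simon and @ahdinosaur describe could be sketched as a simple directed graph where people set trust edges directly, with algorithms layered on top afterwards. This is purely an illustrative sketch; the class and method names are made up for the example, not part of any Open-App design.

```python
# Hypothetical sketch of a binary trust layer: people directly declare
# trust (or revoke it), and "interesting algorithm things" can then run
# on the resulting graph, e.g. friend-of-friend introductions.

class TrustGraph:
    def __init__(self):
        self.trusted = {}  # person -> set of entities they trust

    def set_trust(self, person, other):
        """A person directly declares trust in another entity."""
        self.trusted.setdefault(person, set()).add(other)

    def revoke_trust(self, person, other):
        self.trusted.get(person, set()).discard(other)

    def trusts(self, person, other):
        return other in self.trusted.get(person, set())

    def introductions(self, person):
        """One example algorithm on top of the trust layer: entities
        trusted by those you trust, but not (yet) by you directly."""
        direct = self.trusted.get(person, set())
        second = set()
        for friend in direct:
            second |= self.trusted.get(friend, set())
        return second - direct - {person}


g = TrustGraph()
g.set_trust("alice", "bob")
g.set_trust("bob", "carol")
print(g.trusts("alice", "bob"))          # True
print(sorted(g.introductions("alice")))  # ['carol']
```

The point is that the trust values themselves stay human-set; only the derived suggestions come from the algorithm.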

JD

Josef Davies-Coates Wed 6 Aug 2014 8:22PM

I instinctively like the idea of 'managed trust' (or whatever you want to call it) more than the 'zero trust' crypto stuff.

@tav has been talking about using 'trust maps' for years and once got this nice demo up http://www.trustmap.org/ :) - pretty much exactly what you describe above @ahdinosaur :) i.e. you can say who you trust and in what context you trust them

TT

Theodore Taptiklis Thu 7 Aug 2014 1:37AM

Ok. Problem for me here is 'trust' is a Big Word...an abstract notion that means different things to different people.

I'm thinking a lot at present about the notion of group...and the difference between a self-organising, creative, generative, organic group like Enspiral and other kinds of groups whose compatibility is an accident of time, space or task.

So in place of Simon's 'identity' as the basic building block I'm wondering about 'relationship'. The qualities of groups and relationships seem to me to be more prospective than notions of trust and identity. For me this is because trust and identity belong to a world in which we are oriented towards individualism, rather than a future world in which we become oriented towards collaboration and participation.

AI

Alanna Irving Thu 7 Aug 2014 2:30AM

I'm having repshare flashbacks :p

I think "trusting" might be a good term. "Managed" makes me assume the system will be doing the managing, when what I think you are talking about is the system taking advantage of the efficiencies of the trust relationships humans naturally form. But they are not always "high-trust" relationships; they just depend on trust to an appropriate level as judged by the people. It also differentiates it usefully from "zero-trust" systems, making it a unique proposition in the decentralisation space.

ST

Simon Tegg Thu 7 Aug 2014 8:31AM

Yeah, right now I'm more interested in the overall concept of nodes as groups of people with a focal point for their higher-trust relationships than in the specifics of a web-of-trust implementation.

CS

Caroline Smalley Thu 7 Aug 2014 5:38PM

Thanks @simontegg ..on reading all the comments, I see I completely missed the point! Lesson learned. Trust me next time? Actions.. Actioned Trust? ...or simply Actual Trust: ratings based on actions / permissions for what people can/can't do. 'Actual' because we get to control it. To a certain extent, Facebook already does this. Please don't throw eggs when I say this.. I totally agree it's a nightmare to manage/'control'. Indeed, the less hands-on management, the better, hence I agree with @alanna and @joshuavial that 'managed' is an off-putting term.

Needs to be simple to implement.. thinking on/off vs levels. Could have basic and advanced levels for more complex scenarios. Need to make setting the controls an 'as you go' process.

Couple more thoughts.. maybe an obvious 'to do', but it would help if you could create groups of 'friends' so that setting on/off permissions becomes a less arduous task. This could be complemented by an automated check, whereby if a program detects that one of your trusted connections may have compromised your good faith, it's brought to your attention. There's a name for this.. ?

@ahdinosaur never trust me to get a time right?! sorry to have missed the hangout..
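Caroline's idea of setting on/off permissions per group of friends, rather than per person, could look something like this minimal sketch. All names here ("close-collaborators", "edit-profile", etc.) are invented for illustration.

```python
# Illustrative sketch: permissions are toggled per group, so adding a
# person to a group grants them everything the group allows, and one
# deny switches the permission off for all members at once.

class PermissionGroups:
    def __init__(self):
        self.groups = {}       # group name -> set of members
        self.permissions = {}  # group name -> set of allowed actions

    def add_member(self, group, person):
        self.groups.setdefault(group, set()).add(person)

    def allow(self, group, action):
        self.permissions.setdefault(group, set()).add(action)

    def deny(self, group, action):
        self.permissions.get(group, set()).discard(action)

    def can(self, person, action):
        """A person may act if any group they belong to allows it."""
        return any(
            person in members and action in self.permissions.get(name, set())
            for name, members in self.groups.items()
        )


pg = PermissionGroups()
pg.add_member("close-collaborators", "alice")
pg.allow("close-collaborators", "edit-profile")
print(pg.can("alice", "edit-profile"))  # True
pg.deny("close-collaborators", "edit-profile")
print(pg.can("alice", "edit-profile"))  # False
```

This keeps the basic level to simple on/off toggles, with room for "advanced" per-person overrides later if needed.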

DA

Devon Auerswald Fri 17 Oct 2014 8:39AM

Not sure how appropriate this is to this specific project, but some trust is far more vital than people seem to understand (not here specifically, but in general).

Consider the following possibility for example:

1) Blocking Google from tracking any and all data from search engine results pages on a zero-trust basis becomes a thing, everyone starts doing it, and being invisible/anonymous becomes really popular.

2) People notice search results becoming worse and worse, as Google can no longer obtain high-confidence data as quickly as it did when most volume returned usable tracking data.

3) Meanwhile, the spammers working to manipulate rankings by letting Google track their "intended to manipulate rankings" data (trust me, this is an all-day, every-day occurrence in every niche) become a larger ratio of the data Google is collecting and using to rank websites. Spam begins to float to the top like it's 1999.

4) Because people generally don't understand or care that Google is almost completely reliant on the tracking and analysis of user data, they're probably just going to assume Google got worse because of X, Y or Z unrelated reason.

Whether users notice or not, they begin to search less, find less, and accomplish less. Let's assume, for example's sake, that 300 million people lose 1 minute of productivity a day because of this beating Google is taking, partly due to the mistrust driven by government corruption. Because most people are not data scientists, nobody realises - yet - that the 1 minute 300 million people lose every day adds up to roughly 570 years of time, every day.
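Devon's back-of-the-envelope figure checks out; the arithmetic is just minutes converted to days and then years:

```python
# 300 million people each losing 1 minute per day:
minutes_lost_per_day = 300_000_000

days = minutes_lost_per_day / (60 * 24)  # minutes -> days of time
years = days / 365                       # days -> years of time

print(round(days))   # ~208,333 days
print(round(years))  # ~571 years, i.e. roughly the "570 years" quoted
```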

That possibility scares the crap out of me whenever the privacy issue comes up, because even many tech people don't catch it right away. I figured this is something worth bringing up. Trust is important. Zero trust is a poisoned apple in some circumstances.

Whether this example can be applied specifically to this topic or not, I'm not really sure - I know it will likely be important to consider whenever addressing the issue of user privacy and tracking.

For anyone scratching their head about why Google would get worse - they track your mouse, clicks, search revisions, your last destination post-search, the order of sites you visit, the first site you visit, your last search, your next search. All of that is tracked, and it most likely impacts rankings in a more meaningful way than typically known SEO tactics like link building, as it is far more difficult to fake and a lot more intimate with Google's core goal: UX.