Loomio
Sun 30 Jul 2017

Ecosystem technical architecture

Lynn Foster

This thread is to discuss what we want for initial technical architecture to get started. This can include (but is not limited to)
* Vocabularies needed, and question of mapping them
* Protocols needed
* Technology(s) for messaging between apps
* How to share data between apps
* Centralization vs de-centralization, how do we think about it
* Short term vs long term considerations
* Conceptual framework facilitating discussion of and agreement on the above topics

[All: Please feel free to edit this introduction!]


Bob Haugen Tue 8 Aug 2017

Here's a long and complex conversation about this topic from the Telegram Community Software Interoperability group. I'm posting this in several parts, and will signal when done.

Mat Slats:

@lynn the Solsearch API is doing something I don't think any other platform is doing, because it is indexing content from any other platform. Platforms by nature tend to be self-referential, rather than interoperable. So the whole thrust of my work is to create an interoperability layer, function by function, so that diverse platforms can interoperate using commons services with simple platform-independent APIs. I know of very few instances where platforms are talking to each other in this way, so Solsearch isn't duplicating anything else going on out there.

Bernini:

Hi @matslats, sorry if I'm asking you to repeat the same concept for the nth time: where can I find documentation on Solsearch? Do you have a GitHub repo or a wiki?

Lynn Foster:

Mat Slats
@lynn the Solsearch API is doing something I don't think any oth

Yes, I agree completely with what you said, so perhaps I wasn't very clear. My question is then how should we best make offers/wants interoperable with other important functions? How to think about interacting api's across a software ecosystem? How to mesh all of our visions?

Mat Slats:

The API is documented here: https://app.swaggerhub.com/api/matslats/sol-search/1.0.0 I wonder if it deserves a website of its own. For now Community Forge is running an instance of the Solsearch service, but it is only deployed on a few cforge sites for testing.

@lynn this approach doesn't very well support the more workflow-oriented approach that Value Flows and Wezer support. Solsearch is simply an index of extant offers. So if you want to universally rate those offers, or connect them to inter-group transactions, or move them through workflow states, you would have to build all of that into your own platform.
Those things are very hard to support between platforms because there are so many different ways of doing them. What we can do between platforms with very different internal views of the world is only a minimal feature set without things getting very complicated very quickly.
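As an illustration of the index-not-matchmaker idea Mat describes, here is a minimal sketch in Python. The class, field names, and filters are invented for illustration; the real Solsearch API is specified at the SwaggerHub link above. The point is that the index only stores and filters items pushed by platforms; it never matches counterparties or tracks workflow states.

```python
# Hypothetical sketch of a minimal offers/wants index in the spirit of
# Solsearch: no matching or workflow logic, just storage plus filtering
# on a few fields. All names here are illustrative, not the real API.

class OfferIndex:
    def __init__(self):
        self.items = {}          # item id -> offer/want record

    def upsert(self, item_id, record):
        """Platforms push each item as it changes (no crawling)."""
        self.items[item_id] = record

    def search(self, keyword=None, location=None):
        """Filter only; the index never matches counterparties."""
        results = []
        for record in self.items.values():
            if keyword and keyword.lower() not in record["title"].lower():
                continue
            if location and record.get("location") != location:
                continue
            results.append(record)
        return results

index = OfferIndex()
index.upsert("cforge:42", {"type": "offer", "title": "Bicycle repair",
                           "location": "Geneva", "source": "cforge"})
index.upsert("wezer:7", {"type": "want", "title": "Bicycle trailer",
                         "location": "Madison", "source": "wezer"})
print([r["title"] for r in index.search(keyword="bicycle")])
print([r["source"] for r in index.search(location="Madison")])
```

Anything beyond this minimal feature set (ratings, inter-group transactions, workflow states) would, as Mat says, have to live in each platform.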

Lynn Foster

Mat Slats
@lynn this approach doesn't very well support the more workflow

Well I'd like to try! And maybe we'll find out that a "minimal feature set" is enough for a lot of what people want to do, I don't think we need a complex set of models. I'm not saying it is an easy task, but I think necessary to move towards supporting an actually running fair economy.

Mat Slats:

Right, so my hope is that various platforms will choose to contribute to a simple global index of offers and wants, simply for the purpose of helping people to exchange, regardless of platform, currency, or the movement they happen to be engaged with. I would like to see Freecycle stuff indexed there, for example, but they didn't answer my mail; also MAN, also timebanks, and I can dream of all the projects listed on http://matslats.net/barter-software-reinventing-the-wheel

Bob Haugen:

@matslats - I think I have not understood your proposal before, possibly because I did not understand where you thought it fit into a software ecosystem.
I assumed it was an API into your existing mutual credit software.
Do you mean it as a possibly-global intent-casting service, where many apps could publish their offers and wants and match them up, and then the matching counter-parties would conduct the conversation about a possible exchange between their "home" apps?
Like, how would this work in the Mutual Aid Networks, where they are using your mutual credit system as well as Wezer, and both systems have offers and wants?
(But I think only Wezer tracks the states of conversations about offers and wants...?)
This also gets back to @fosterlynn 's questions about the intended architecture.

Eric Doriean:

Great and needed conversation everybody. I got into this because at AnyShare we are soon going to integrate Loomio, and later others, via their API. The idea of SSO for both AnyShare, Loomio, and others would be great, so you could click on a Loomio poll in AnyShare and not have to log in again on Loomio. That is, if Loomio chose to use the same SSO; something that would be more likely if a lot of us came together and worked on the same SSO solution.

Although this would all be great, there's an opportunity to kill a few birds with one stone here. A massive issue on the net is users' data, as you all know. We at AnyShare have a goal, like I'm sure others do here, for users to have complete ownership of their data, so they can choose who can or can't use their data and for what. In researching this I came across Solid (https://solid.mit.edu/). In a perfect world, an SSO system would use Solid or something like it, so that the data could be used by other apps if the user chooses. That way data could be tagged with Value Flows or other vocabulary, like the grammar they are using at the Ceptr project. The user signs into the apps they want using the SSO and then chooses what data they want to share. Love to hear people's thoughts, whilst we're talking vision.

Mat Slats:

@bob Yes, a global intent-casting service. But very simple: there is no matching service provided, only the ability to filter results on location, keyword, and a few other things.

Bob Haugen

Mat Slats
@bob Yes a global intent-casting service. But very simple, there

Ok, thanks. I might finally get it. Sorry if I should have understood faster, but sometimes I am slow, and there are a lot of moving pieces in this potential software ecosystem.

Bob Haugen:

Eric Doriean
Great and needed conversation everybody. I got into this because

https://anyshare.coop
Welcome @solarpunked
Are you doing anything with Solid or Ceptr? Or just looking?
(I'm just looking at both...;-)

Numa
To me it seems clear that APIs to connect apps to existing platf

@numa_gopacifia I agree that strategy is important at this stage, and I like APIs, but I don't think they are a practical strategy.

If every app publishes their own API, and you are using many apps, you pretty soon run into a combinatorial explosion of APIs that must be somehow integrated.

That's why at least some of us prefer vocabularies and protocols for app interoperability.

But getting agreement on those is hard, too, so we are also interested in vocabulary translation systems like what Tibor of Communecter showed us.

And I am always ready to be proved wrong about any of my opinions...
But also, in practical terms, if somebody publishes an API, I can usually derive a vocabulary and maybe protocols from it.
Still, I'd like to see more common agreements on these kinds of things in the commons...

Mat Slats:

Numa, an app doesn't publish an API. A REST service publishes an API, and an app is a client that implements the API. So to rephrase what you said:
If many REST services publish an API, and an app is using many REST services, then instead of an explosion of APIs, you can have an explosion of interoperability. Every common function an app does could be done in a standard way, and maybe even in the same database. Instead of one app, one big API, one backend, many functions, one user community, you can have many apps, many small APIs, and many backend services, giving a choice of functions and serving a much wider community.

Bob Haugen:

What I mean by an app publishing an API is an app as a service that you can communicate with thru its API.

If you want to define App as Client, that's ok with me, but I want software that can provide services, too.
If each REST API has its own vocabulary (sets of fields with different meanings, different messages you can use to POST or PUT), then your client app using them all has a combinatorial explosion.
The HTTP(s) protocol helps a lot with a standard set of methods, but over the top of those methods another layer of protocols emerges.

Mat Slats:

My understanding is that the app is on the client side of the API (e.g. the mobile phone) and the service on the server side. The whole point of this architecture (I think) is that the two are separate, independent components conforming to the same spec.

Bob Haugen:

I am ok with using those definitions, if it will help you understand me better.
I understand client and server more easily.

Mat Slats:

So, ignoring Solsearch, which is only indexing content from existing platforms: imagine if all the offers and all the wants from all the apps were stored in the same db. It wouldn't matter what app you were using, under what brand, with which business model; you would have de facto access to the whole marketplace. You could even switch to another app without losing access to that market. Compare that with, say, Pollen or a typical local exchange, in which you only have access to Pollen users or members of the local system...

Bob Haugen:

Disregarding the database angle, if everybody used your offers and wants api, that would constitute a common vocabulary for offers and wants and some part of a common protocol.
What would emerge on top of that protocol is the layer 8 protocol for economic interactions, where offers and wants find each other and engage in conversations where they either do or do not exchange resources.
Like in the Mutual Aid Network, where I think their Wezer app handles the economic protocol.
And I think in at least some mutual credit systems, that is all handled informally.

Mat Slats:

Yes, and the API would probably need to be a bit more accommodating of variation, for example by allowing user-defined fields. But I can't design that myself very well.

Bob Haugen:

Yeah, vocabularies are hard, and common vocabs are harder, and need to be defined collaboratively.
And then redefined collaboratively....

Mat Slats:

You could have flexible fields as well, so the db can have a plain text field called, say, 'extra', and each app could store the unspecified fields and values in a serialised array. Then any client could render those fields legibly without understanding them. Better still, in a NoSQL database every item can have different fields and they are still searchable.
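A tiny sketch of that 'extra' field idea, with all field names invented: the core columns stay fixed, each platform serialises its own nonstandard fields into one text column, and a client renders those key/value pairs legibly without knowing what they mean.

```python
import json

# Sketch of the flexible "extra" field: platform-specific fields are
# serialised into one text column, and the client renders them
# generically. Field names are illustrative only.

def store_offer(title, location, **platform_specific):
    """Return a row with fixed columns plus a serialised 'extra' blob."""
    return {"title": title, "location": location,
            "extra": json.dumps(platform_specific)}

def render_offer(row):
    """Render the fixed fields, then the extras, without understanding them."""
    lines = [f"{row['title']} ({row['location']})"]
    for key, value in json.loads(row["extra"]).items():
        lines.append(f"  {key}: {value}")
    return "\n".join(lines)

row = store_offer("Bicycle repair", "Geneva",
                  hours_credit=2, tools_provided=True)
print(render_offer(row))
```

A document (NoSQL) store makes the same move at the database level, since every item can carry its own fields and remain searchable.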

Bob Haugen:

Bob Haugen
Like in the Mutual Aid Network, where I think their Wezer app ha

I probably used app differently than you do, again. Wezer performs both client and server functions.

Mat Slats:

Wezer is what I would call a platform that delivers ready-to-render HTML, rather than the service/app model, which passes JSON back and forth.

Bob Haugen:

I've been using app in the sense of Layer 7 in the Internet protocol stack which is the application layer, not in the sense of a phone app. But I'll change if that confuses people.
Anyway, I think we are getting too deep in the weeds, and I need to cook breakfast.
But I'm glad I finally understand your ads api proposal a bit better...
Not too interested in arguing definitions...

Mat Slats:

hey bob I didn't consider this an argument, and it's not the first time between us it has been very necessary to clarify how we use words. It was months before I understood what you meant by vocabulary, for example.

Sybille Saint Girons:

About ontology, remember to see what is going on at Virtual Assembly with Communecter and Simon Louvet.
They have defined PAIR (projects, actors, resources, ideas).

Bob Haugen

Sybille Saint Girons
About ontology remember to see what is going virtual assembly wi

We met with Communecter this week, and are meeting with Guillaume Rouyer of Assemblée Virtuelle next week. Haven't met Simon Louvet yet.

Mat Slats
hey bob I didn't consider this an aargument, and not the first t
Yeah. I understand. And most people have no idea what I mean by protocol, either...

(more to come in next comment)


Bob Haugen Tue 8 Aug 2017

This last part of that long comment was garbled, and Loomio apparently can't edit something that long. It should have read:

Me responding to MatSlats:

Mat Slats hey bob I didn't consider this an aargument, and not the first t

Yeah. I understand. And most people have no idea what I mean by protocol, either...

(In other words, as you can see from several parts of these conversations, it takes awhile before we understand what each of us means by the words we use.)


Bob Haugen Tue 8 Aug 2017

Bernini:
diagram1

does this diagram represent the vision you're expressing so far?

Mat Slats:

yes thanks a lot bernini. You've got the relationships exactly right. I would add brand names to the apps, e.g. LETS app, Timebanks USA app, MAN app.
I would name a couple of the APIs: Accounting API, P2P adverts API, Authentication / group membership API.
And instead of 'backend' I would say 'REST service'.

Bernini:

I think it wouldn't be only a REST service, because by backend I mean also server logic and db.
Is it the same for you?
Anyway, I'd like to propose a slightly different diagram:

diagram2
diagram3

Mat Slats:

For me a REST service includes business logic and data storage; maybe I'm using the term unusually, I don't know.

Bernini:

ok, at least we know it's the same :)

Mat Slats:

I don't understand the extra layer you added. And the backends wouldn't necessarily talk to each other. They just need to have a common authentication method, something like: the client passes a token that proves the current user is a member of group X.
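A hedged sketch of that shared-authentication idea: the backends never talk to each other; they just agree on how to verify a token asserting "this user is a member of group X". Here an HMAC over a JSON claim stands in for whatever real SSO scheme (OAuth, JWT, Solid, ...) the ecosystem might choose; the secret and claim shape are invented for illustration.

```python
import base64
import hashlib
import hmac
import json

# Stand-in for a shared SSO token: a JSON claim signed with a secret
# that all backends know. A real system would use asymmetric keys or
# a standard like JWT; this only illustrates the shape of the idea.

SHARED_SECRET = b"demo-secret"   # would be provisioned out of band

def issue_token(user, group):
    claim = json.dumps({"user": user, "group": group}).encode()
    sig = hmac.new(SHARED_SECRET, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def verify_token(token, group):
    """Return the user name if the token proves membership of `group`."""
    claim_b64, sig = token.rsplit(".", 1)
    claim = base64.b64decode(claim_b64)
    expected = hmac.new(SHARED_SECRET, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # forged or corrupted token
    data = json.loads(claim)
    return data["user"] if data["group"] == group else None

token = issue_token("lynn", "group-x")
print(verify_token(token, "group-x"))
```

Any backend holding the shared verification material can accept the claim without ever contacting the backend that issued it.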

Bernini:

What you said is how the first diagram works, but I think it would be fundamental, in order to create a true ecosystem, to provide a unique set of APIs / GraphQL / ..., already structured and organized, with all the data coming from all the different apps merged to specific endpoints.

The reason why is: if you have 2 endpoints from which the client needs to GET the same list of data (representing the same items), the frontend should be dumb enough to retrieve the list from a unique endpoint that contains the merged items.
If not, the frontend would be really complex and over-structured.
For this reason we would need the extra layer, which should allow communication among the backends.

Sybille Saint Girons:

Can you give an example of apps and backends please?

Bernini:

Pretty much the same as Mat's:

An app could be a LETS app with an exchange market integrated.
A backend could be the backend of the LETS, where all user balances and transactions are stored,
and where there is the business logic to update user balances and info.

Mat Slats:

can you give a more practical example of what the extra layer would do?

Bernini:

USE CASE: we have 2 separate backends that offer a list of wants/offers, and we have a client app that needs to show a unique list of wants/offers.
With diagram 1, the client should GET data from the endpoint provided by backend 1 and make another GET from the endpoint provided by backend 2, then add logic to normalize the data (because normally the JSON retrieved from different endpoints is formatted in different ways, with different information and different methods),
and then display the list of wants/offers.
With diagram 2, the logic to normalize data would be done in the extra layer, so that the data arriving at the REST API is already normalized and merged together.
In this use case, the client app would make a unique GET to one endpoint and show the list as it arrives.
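To make Bernini's "extra layer" concrete, here is a minimal sketch: two backends return the same kind of items in different shapes, and a middleware function normalises and merges them so a dumb client can make one GET and render the list as it arrives. Both payload shapes are invented for illustration.

```python
# Two backends exposing the same kind of items in different JSON shapes
# (both shapes are made up for this sketch).
backend_1 = [{"title": "Bicycle repair", "kind": "offer"}]
backend_2 = [{"name": "Garden tools", "is_offer": False}]

def normalise_1(item):
    return {"title": item["title"], "type": item["kind"]}

def normalise_2(item):
    return {"title": item["name"],
            "type": "offer" if item["is_offer"] else "want"}

def merged_endpoint():
    """What the single, merged endpoint would return to the client."""
    return ([normalise_1(i) for i in backend_1] +
            [normalise_2(i) for i in backend_2])

print(merged_endpoint())
```

With diagram 1, this normalisation logic would instead live in every frontend; with diagram 2 it is written once, in the extra layer.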

Mat Slats:

In the case of two separate backends storing offers and wants, I have already built an indexing service (SolSearch), which I judged to be easier than synchronisation or making multiple requests.

Bernini:
I need to find the time to understand how it works :)

Anyway, that example is possible to replicate with every kind of data.
In an ecosystem of apps, replication and normalization of data is one of the first challenges we'll find, imho.

Mat Slats:

Solsearch doesn't normalise the data; it is a searchable index of the data. It is populated not by crawling but by each database updating each item as it is changed.

Bob Haugen:

One of the things that will happen in the middleware is that different clients will be handling a lot of different data, not just offers and wants, and some of the other data will be related to the offers and wants.

And likewise the different backends will provide different data sets, some about offers and wants, and some about resource production, etc etc.

We just finished a conversation about blockchains that are starting to handle different mixes of "assets", for example, with all of the history of all the assets.


Bob Haugen Tue 8 Aug 2017

Eric Doriean:

Bob Haugen
Are you doing anything with Solid or Ceptr? Or just looking?

With Ceptr, eagerly awaiting the alpha of Holochain and getting across the Ceptr grammar. Am seeing grammar like what they use, as well as open values, as key for interoperability. APIs are nice and we have one, but only part of the solution imo. The Ceptr white paper on their grammar is very good. http://ceptr.org/whitepapers/grammatics

Not much in terms of Solid yet. Only recently came across it. The need is there, and it makes sense to use W3C standards and piggyback off Tim Berners-Lee's fame.

Bob Haugen:

Thanks for the Ceptr paper.

Same basic idea of common vocabularies between Solid and Ceptr (I think - need to study this latest Ceptr paper...)

Grammar is necessary too.

@Bernin1 has been working on sentence structure for APIs

Eric Doriean:

We're mostly thinking open values, and in particular REA, as short- to medium-term goals. Ceptr is still very much a wait-and-see, but getting across it now.

Lynn Foster:

@solarpunked just looking at Anyshare, will study it some more, maybe joining is the best way to do that....?
Yes for Ceptr and Solid, it is a question of readiness I guess

Bob Haugen:
3 people who are pretty deep into Solid in the Value Flows gang (but not me)
Lynn is deeper than me

Lynn Foster:
I'm deep into VF but not into Solid

Bob Haugen:
I'm thinking Pavlik, Kalin, and Brent.

Melvin comes around some, too, and he is way in deep.
We got nobody who's into Ceptr yet.

Eric Doriean:

Yeah, easy to get mixed up with all the acronyms. REA (Resource, Event, Agent) is simple and makes a lot of sense for interoperability and as the basis of circulatory economics. It's one thing for an app to use it, but others need to be able to get/use the data. Hence why we're keen on SSO and a way for users to choose how they share their data.

Bob Haugen:

REA is our core, as well.
And was one of the main topics on that blockchain call we just finished.

Mat Slats:

OK yes, middleware is where the business logic happens

Lynn Foster:

@Bernin1 great pictures, really helps the discussion!
Question: what is the communication between the backends on your second picture?

Mat Slats
OK yes middle ware is where the business logic happens

I think of business logic happening on the backend...? Is middleware the API / vocabulary layers?
(just making sure I get the terminology)

Lynn Foster:

@matslats where does Solsearch sit on either of the diagrams? Does it include any user interface? I am thinking it is a backend thing?

(I had misunderstood earlier; I hadn't realized the backends are responsible for writing to the index. Now I understand more why there is one API dictated by the index, and that makes me understand why you talk about big databases of different sections of information.)

Mat Slats:
Solsearch would be underneath the backends on the diagram. It is a service to platforms (or other services).


Bob Haugen Tue 8 Aug 2017

Pospi:

A very interesting discussion being had here! On using multiple APIs: some seem to be suggesting that this is an appropriate low-friction way forward. @bhaugen is absolutely right about the "combinatorial explosion" that comes with many different (competing or complementary) APIs being implemented. But such an explosion isn't free, and in practice I think we'll find it falls flat. Developers don't often implement apps which talk to multiple APIs because of the considerable effort involved in doing so: connecting to 1 API is enough work, let alone several. If apps connecting to multiple services were a useful and appropriate way of achieving interoperability, we'd have already seen it happen. The tech has been there for decades.

I'd also add that creating a translation layer on top of existing APIs that normalises the data for end-user apps only partially alleviates the problem. You might be able to share the normalised data with more apps, but really you're just pushing the problem further down and requiring a complex intermediary be built which must still do all the translation work. Of course this will be unavoidable to some degree, but I think it should be looked on as a backwards-compatibility measure and not a forward-thinking solution. Indexing and normalising with "helper" services like Solsearch is also a great way to bring older apps up to date.

In a pinch, you might be able to get something workable with a single set of APIs/services chosen for each purpose (1 service for signon, 1 for offers/wants, etc), but it's never going to be an ecosystem. Nobody is going to come along and integrate offers/wants services #2, #3 and so on.
In my view the current pragmatic & inclusive solution is common protocols & vocabularies, or systems like the ones referred to earlier which translate between. Once you've defined some grammar for communicating what "user data" looks like (eg. solid), then developer effort at the application level is potentially reduced to adding some extra URLs to integrate additional services. It needs to be that easy for the application developers, or it just won't happen. But this requires the service developers to understand and build to the specs, which is often easier said than done.
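Pospi's point that "developer effort is reduced to adding some extra URLs" can be sketched concretely. In the fragment below the service URLs and the fetch stub are hypothetical; the idea is only that once every service speaks the same vocabulary, the app-side code collapses to one loop over a configurable endpoint list instead of one adapter per API.

```python
# Sketch of "integration = adding a URL": every offers/wants service is
# assumed to return items in one shared vocabulary, so the client needs
# no per-service adapter. URLs and payloads here are invented.

OFFER_SERVICES = [
    "https://solsearch.example/offers",
    "https://timebank.example/offers",
]   # integrating a new conforming service = appending one URL

def fetch(url):
    """Stand-in for an HTTP GET; each service returns the shared shape."""
    demo = {
        "https://solsearch.example/offers":
            [{"title": "Bicycle repair", "type": "offer"}],
        "https://timebank.example/offers":
            [{"title": "Childcare", "type": "offer"}],
    }
    return demo[url]

def all_offers(services=OFFER_SERVICES):
    return [item for url in services for item in fetch(url)]

print([o["title"] for o in all_offers()])
```

The burden shifts, as Pospi says, to the service developers, who have to build to the shared spec for this loop to stay this simple.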

The long-term, more tightly integrated solution is to just have a single system / database as @matslats describes. The tradeoff you have to make there is accepting a single global world-view. This is where blockchains & potentially holochains come in- but such solutions are only going to reach widespread adoption if they are designed by communities with extensive amounts of feedback to ensure that they work for all users. Otherwise we're just creating new sets of silos with weak glue between them but doing it on top of new tech.

The nice thing about such projects which integrate code into the network is that developer error and failures become "harder". Which is to say- it's really easy to mess up conforming your data to some protocol in a way that makes some of the interoperability not work; it's less easy to mess up code which talks to other code directly because (if the tooling is right) you can't avoid having the language & compiler in there to error check a lot of things for you.

Here's some kind of attempt to visually describe the landscape as I hope it unfolds.. I'm not as good at pretty pictures as @Bernin1 though :D

diagram4

(I'm leaving blockchains out of the picture for now to avoid overcomplicating it, but interchange between those systems would probably look much the same anyway)

Lynn Foster:

@pospigos thanks for the thoughtful post

Sybille Saint Girons:

+1

Gustav Friis:

@pospigos very good description, thanks, I learned quite a lot. I agree very much with your silo platform/protocol concerns, which are not by default solved by new technologies such as blockchain. Below is a simplified outline of the architecture we are currently working on, illustrating a mobile application interacting with a smart contract on the public Ethereum blockchain, mediated through a belt of federated relay servers.

I agree that designs like this, at least in the short term, are likely to suffer from a 'silo syndrome' as well, where other silos are likely to be solutions based on underlying consensus protocols other than Ethereum, say e.g. Faircoin or Bitcoin RSK.

For the longer future I think this syndrome will become less of a problem due to interoperability between different BFT blockchain protocols, for example enabled through sidechains based on two-way pegs.

diagram5

Bernini:

:heart::heart: both for pospi and gustav solutions...:smile::+1:

Bob Haugen:

This is getting to be an interesting discussion..

Lynn Foster:

:+1:


Bob Haugen Tue 8 Aug 2017

Pospi:

@Gfriis for your reading pleasure here are some other interoperability efforts underway, with Gavin Wood's Polkadot probably being the most ambitious:

(mentions xrelay, which people at Consensys tell me is the generic version of BTCRelay currently in progress. Can't find much other info on it though)

Pospi:

the team tells me xRelay is still in "stealth mode", so I guess you'll have to wait to hear more on that front!

Gustav Friis:

Cool, @pospigos, most appreciated. I am slightly familiar (will probably never fully understand them hehe) with Cosmos and Polkadot already, and have met a few of their team members; definitely capable projects! xRelay is news to me too, will ask around for info on that one!

For Cosmos, they rely heavily on Tendermint, with strong similarity to Hydrachain. That is, so to speak, a factory for creating customized blockchains (permissioned/permissionless) with different consensus.

Solutions that in the future will allow for pluggable consensus, which is also the long-term vision of the Enterprise Ethereum Alliance, so that the public chain can be used.

Both are already a bit old, and the newer developments here are especially CITA from Cryptape, Hangzhou, which I recently visited. Cryptape also helps to develop the Casper PoS Ethereum update.

https://github.com/cryptape/cita

Another one is from Taiwan, and the mysterious AMIS group, who recently released this Istanbul Byzantine Fault Tolerant EIP:

https://github.com/ethereum/EIPs/issues/650

I also recommend taking a look at OmiseGO; this whitepaper is mainly written by Joseph Poon, one of the creators of the Lightning Network protocol.

https://cdn.omise.co/omg/whitepaper.pdf

Omise is a very powerful commercial entity; they have sponsored, for example, the development (in parts) of Raiden Network and Cosmos Network, plus various other Ethereum initiatives through Dev Grants.

Pospi:

thanks heaps! I will take a look at those links for sure. Have heard of Omise before but not the others

Eric Doriean:

Thanks @pospigos and @Gfriis

I never thought about translation for grammar. Very cool. Feeling a good step 1 is to start with a grammar, and to me Value Flows and REA make a lot of sense. Ultimately we'll see a lot of different grammars come and possibly go, so I do see how translation would be needed.

Know we're still in vision/discussion mode, but at some stage it would be good to start thinking on steps that app devs like us at AnyShare would need to implement.


Bob Haugen Tue 8 Aug 2017

Bob Haugen:

I think it is necessary to think both strategically and tactically.

Strategically: something like the diagram @pospigos just showed us. And people start to build software components that were designed from the start to be part of an ecosystem (which is actually starting to happen: the Kamasi project is like that, and I think also @matslats' Ads API, and likewise the projects working on SSO).

Tactically: start to integrate different existing software that is used in a community, as FairCoop wants to do with OCP, ChipChap, and FairMarket, and the Mutual Aid Network with Hamlets (mutual credit) and Wezer. That will probably be done more opportunistically at first: just make it work. And then when the ecosystem practices get more developed, maybe change over to the ecosystem architecture, or maybe not.

The SSO component might also do some evolving. All the components will evolve as we get better at being an ecosystem.

I think maybe the Kamasi project might be the first app that was designed to fit into something like @pospigos 's diagram...

Lynn Foster:

Eric Doriean
Thanks @pospigos and @Gfriis I never thought about translation

Know were still in vision/discussion mode, but at some stage would be good to start thinking on steps, that app devs like us at AnyShare would need to implement

@solarpunked do you want to also join in on the loomio Open App Ecosystem conversation? The focus there tends to be more action oriented - https://www.loomio.org/g/exAKrBUp/open-app-ecosystem

And as always, there is nothing like a good use case on the ground to move things along. Mutual Aid Network has one to make existing software interoperable, but that will involve people doing api work on other people's software, and will also need a single signon piece. FairCoop at some point will also want to make several things interoperable.

Is there some group using AnyShare that could fit in somewhere? Or some other way you have in mind to explore how to plug Anyshare in to the "tactical" steps, either here or in loomio?

Bob Haugen:

@pospigos 's diagram describes a stack-based architecture, if I understand correctly, where "client" apps would use a protocol adapter to communicate with what to them are "server" apps (although in some cases the "server" apps would communicate with other "server" apps, too).

(I am using "client" and "server" in quotes because each of the apps could potentially play either role.)

Pospi:

Check!

Bob Haugen:

I am very interested in working out the second situation, where open apps communicate with each other by switching roles in message-passing conversations.

A Conversation for Action protocol is like that:
https://www.valueflo.ws/introduction/cfa.html

In the state machine diagram on that page, A and B alternate being client and server.
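A much-simplified sketch of such a Conversation for Action state machine follows (see the valueflo.ws page above for the real diagram). The stage names echo the ones mentioned later in this thread (Negotiation, Evaluation); the action names and allowed transitions are illustrative, not the full protocol.

```python
# Toy CfA-style state machine. States and transitions are invented for
# illustration; the real protocol is documented at valueflo.ws. The
# "message" self-loops are where A and B alternate client/server roles.

TRANSITIONS = {
    ("intent", "respond"): "negotiation",
    ("negotiation", "message"): "negotiation",
    ("negotiation", "agree"): "performance",
    ("performance", "deliver"): "evaluation",
    ("evaluation", "message"): "evaluation",
    ("evaluation", "accept"): "complete",
}

class Conversation:
    def __init__(self):
        self.state = "intent"

    def step(self, action):
        key = (self.state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"{action!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

c = Conversation()
for action in ["respond", "message", "agree", "deliver", "message", "accept"]:
    c.step(action)
print(c.state)
```

Each `step` could correspond to one message POSTed between the two apps, with each app validating the transition before accepting it.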

Pospi:

Same! Do we build "push" or "pull" services for this? Push seems preferable, but pull seems more pragmatic, possibly?

Bob Haugen:

Pospi
Same! Do we build "push" or "pull" services for this? Push seems

Good question!
How would pull work in that state machine diagram?

Pospi:

I guess the service reading would just poll the other? So less efficient in terms of resource usage..

Bob Haugen:

How would the conversation get started? By responding to an intent, maybe?

Pospi:

Push seems ideal, for immediacy too. But it requires producing services to know about all their consuming ones. Maybe there's a way to be clever about it using something like PubSubHubbub (or whatever is in common use these days).
Hm, you might need to put that into concrete examples before I'm with you. Off to dinner, will pick this up later (:

Bob Haugen:

Couldn't the initiating app A POST a message to the initially responding app B via some designated URL of B?
I'll write up a concrete example.

Eric Doriean:

Lynn Foster
Know were still in vision/discussion mode, but at some stage wo

Well said and thought out. Don't have a case study that would be the right fit just yet, but I will work on that.

Ps. Have joined the Loomio group and will get active in that during the week

Bob Haugen:

Concrete example: Lynn is A, Pospi is B, in CfA.

Lynn posts a request for help in debugging her graphql mutation, on her personal app. Also gets posted on @matslats solsearch ads service.

Pospi sees it on some link to Lynn's request on the solsearch ads service, and responds to Lynn's personal app using the link in the solsearch ad.

After that, the conversation continues between Lynn and Pospi's personal apps.

Pospi's response says that he offers to help debug this problem, but that he can't reproduce the bug.

Lynn explains more about how to reproduce the bug.

Pospi says, ok, I finally get it. Then he pushes a change to the graphql api code that he thinks will solve the problem, and notifies Lynn.

(These are messages in the Negotiation stage of the CfA state machine.)

Lynn tries the proposed solution and says to Pospi that it still doesn't work. They go around a bit in messages about how to make it work, and finally arrive at a working solution.

(Those messages are in the Evaluation stage of the CfA state machine.)

@pospigos - Clear enuf?

Bob Haugen:

Lynn and I talked about this on our walk. A couple of additions:

  • An organizational app might have many such conversations going on at the same time, even about the same intent. Personal apps might have them, too. So each conversation will need to have its own unique identity, and each of the apps involved will want to keep track of the state of the conversations and the message history.
  • One way I have seen that done is via a traveling document, where each message gets appended to the traveling document, and then each participant signs both their message and the traveling document when they post their next message.
  • Also, some kind of consensus mechanism, maybe paxos-like or maybe even simpler, will be required around each message send and receive, so the sender knows the message was received and understood, and so that, if they think it has not been received and repost it, the message still appears once and only once in the conversation.

Ok, that was 3 additions...
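
The "traveling document" idea above can be sketched as a hash-chained, signed message log. This is an illustration, not a spec: HMAC stands in for real per-agent signatures, and the key handling is deliberately naive.

```python
import hashlib
import hmac
import json

def sign(key, payload):
    """HMAC stand-in for a real per-agent signature."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

class TravelingDocument:
    """Append-only log; each entry chains to the whole document so far."""
    def __init__(self, conversation_id):
        self.conversation_id = conversation_id  # unique identity per conversation
        self.entries = []

    def append(self, sender, text, key):
        prev = self.entries[-1]["doc_signature"] if self.entries else self.conversation_id
        body = json.dumps({"sender": sender, "text": text, "prev": prev},
                          sort_keys=True).encode()
        self.entries.append({
            "sender": sender, "text": text, "prev": prev,
            # the sender signs both their own message...
            "message_signature": sign(key, body),
            # ...and the document-so-far, by folding in the previous chain value
            "doc_signature": sign(key, body + prev.encode()),
        })

def verify(doc, keys):
    """Re-walk the chain; any tampered entry breaks one of the checks."""
    prev = doc.conversation_id
    for e in doc.entries:
        body = json.dumps({"sender": e["sender"], "text": e["text"], "prev": prev},
                          sort_keys=True).encode()
        key = keys[e["sender"]]
        if e["message_signature"] != sign(key, body):
            return False
        if e["doc_signature"] != sign(key, body + prev.encode()):
            return False
        prev = e["doc_signature"]
    return True
```

Because each `doc_signature` folds in the previous one, editing any earlier message invalidates everything after it, which is what lets both parties trust the shared history.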

We should harvest this chat into some ongoing document...
The document should be shareable here in the open app Loomio group.

(This is that document.)

BH

Bob Haugen Tue 8 Aug 2017

Bob Haugen:
P.S. Lynn and I sketched out an even-simpler 2-agent post exactly once setup awhile ago.

2-agent transaction protocol:
https://docs.google.com/document/d/1g8NUOziTtFJIzVc2uJTcwQycC_cuTKTbErkCXW7JegM/edit?usp=sharing
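
The linked doc is not reproduced here, but one common way to get "post exactly once" effects between two agents is at-least-once delivery plus an idempotent receiver. The sketch below is that general pattern, not necessarily the protocol in the doc.

```python
import uuid

class Receiver:
    """Deduplicates by message id: re-sends are acked but applied only once."""
    def __init__(self):
        self.seen = set()
        self.applied = []

    def receive(self, msg_id, payload):
        if msg_id not in self.seen:
            self.seen.add(msg_id)
            self.applied.append(payload)
        return ("ack", msg_id)  # always ack, even for a duplicate

class Sender:
    """At-least-once: keep re-sending the same id until an ack arrives."""
    def __init__(self, receiver):
        self.receiver = receiver

    def send(self, payload, max_attempts=3):
        msg_id = str(uuid.uuid4())
        for _ in range(max_attempts):
            if self.receiver.receive(msg_id, payload) == ("ack", msg_id):
                return msg_id
        raise RuntimeError("no ack received")
```

The sender may transmit a message several times; because the receiver remembers ids, the effect on the conversation happens once and only once.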

BH

Bob Haugen Tue 8 Aug 2017

...and that was the end of the harvested conversation. Hope it made sense in this context, and that you found at least some of it to be interesting and possibly useful.

BH

Bob Haugen Wed 16 Aug 2017

Short version of that long and rambling conversation:

Three approaches to technical architecture have been discussed and tried out among participants in OAE-related projects:
1. Stack-based: many components combined into the same stack, all working together, and presenting themselves to users as a whole system.
2. API-based: each component publishes an API which other components can use to communicate with it.
3. Vocabulary-message-based: components share a common vocabulary and send messages to each other using that vocabulary.

[edit] Approaches 2 or 3 could present themselves to users as a single system, or not.

Those approaches are not mutually exclusive: they can be mixed and matched.
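
To make approach 3 concrete: components agree on vocabulary terms and exchange messages typed with them. Every term and the namespace URI below are placeholders for illustration, not the published ValueFlows terms.

```python
import json

# Hypothetical message using a shared vocabulary; "vf:" terms and the
# example.org namespace are invented for illustration.
message = {
    "@context": {"vf": "https://example.org/vf#"},
    "@type": "vf:Intent",
    "vf:action": "work",
    "vf:note": "Need help debugging a GraphQL mutation",
}

def understands(msg, known_types):
    """A receiving component handles a message only if it knows the type."""
    return msg.get("@type") in known_types

wire = json.dumps(message)  # what actually travels between apps
print(understands(json.loads(wire), {"vf:Intent", "vf:Offer"}))  # -> True
```

The point of the shared vocabulary is that the receiver needs no knowledge of the sender's internals, only of the terms; that is what distinguishes approach 3 from a per-component API.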

Here is a Loomio conversation from three years ago about vocabularies:
https://www.loomio.org/d/5WOvZfEq/linked-open-data

BH

Bob Haugen Wed 16 Aug 2017

@asimong how do you think those technical architecture approaches fit with organizational approaches?

To the extent that organizational forms have been discussed among OAE participants, we usually assumed that components would be developed independently by different people in different organizations, who would collaborate and make their components work together as it suited them. Pretty lightweight.

A stack approach would require a different form of organization, I think, which could still collaborate with other orgs with different forms for API or message-based components.

LF

Lynn Foster Wed 16 Aug 2017

A stack approach would require a different form of organization, I think, which could still collaborate with other orgs with different forms for API or message-based components.

Interesting question. I tend to think that we need to support both a loosely coupled stack approach (such as the one that popped up just now in Bob's last comment), and a message-based approach.

I'm not sure the technical architecture is that different between the loose stack and the message structures. But that might be because I'm not super technical in the architecture department.

I do think this discussion is really important to have though, the sooner the better. And I'll be very interested in how we end up thinking about organization in relation to those decisions.

BH

Bob Haugen Wed 16 Aug 2017

We have a lot more experience with stack or API architectures than message-based. While Kamasi uses the VF vocabulary, it is not message based, it uses the API in a client-server mode where the client is always the client, and the server is always the server. And at this stage of Kamasi, the client is always talking to the same server (OCP), although that may change over time. I know @ivan116 wants it to be able to work on any stack.

I think in a message-based architecture, the components would be much more loosely connected, might be separately hosted on different domains, and if the components were carrying on conversations, they would alternate between "client" and "server" roles, as I sketched out in this github issue.

@matslats Ads API, described in that long set of messages above, would work almost like a message-passing setup, in that separately-hosted components would POST and GET offers and needs via his API.

I hope I am communicating clearly here. Please let me know if I'm not.

SG

Simon Grant Wed 16 Aug 2017

Huge appreciation to Bob @bobhaugen for giving us the detail and summarising succinctly!

Some disclaimers are due from me... I know next to nothing on the technical architecture front. I have some amateur coding experience only. My background, and my approach, is very human-centred. I've been fairly deeply into HCI, soft systems, socio-technical, semantics, ontology, that kind of thing.

First, on the API/vocab issue. I think I follow you, Bob. APIs, yes, but if APIs use elements with different, incompatible semantics, there's going to be a quite possibly intractable translation challenge. So, maybe, APIs based on common vocabularies and semantics.

Now to the human interface point of view: whether a set of apps presents to the user as one unified set or a set of separate apps linked behind the scenes, I don't think one can answer in the abstract. What does seem to matter, a lot, is the apparent complexity at any one point in the interaction. So: structure any interaction with clearly delineated decision points, and present all the information relevant to any decision within easy reach of the place where the decision is made. That's how to support humans making decisions, in my mind at least.

While there are purely technical matters that are best hidden (for the above reason) from the end user, I'm a firm believer that the concepts that are used to represent things in the ontology of relevance to end users should be comprehensible to those end users, at least in outline. That way, it will be much easier to build a set of services that actually make sense to users.

Or take it from a different angle... the technical architecture should be built on top of a conceptual architecture that models the common understanding of common users ("commoners"). Added to that, sure, we may need further elements of the conceptual framework that are relevant to developers, and not to users. But that's OK. Though it's liable to change as the technology changes in the future.

And, in my view, the kind of conceptual model diagrams that are used in the above discussion, and the kind that are used by REA people in explaining REA, sit in this good space of models that are intended for common agreement — some, OK, are more technically oriented, but anyway.

I'd really be interested to know if there is disagreement on any of the above issues. What's plain to me might be obscure to others, or just plain mistaken!

There are a couple of architectural memes that I have picked up over recent years. One is "small pieces loosely joined". Another is the general concept of linked data. Those two are not enough, by any means, and I wonder if there are other common memes — more commonly understood memes — that we might use as principles for the technical architecture underlying the ecosystem?

At this point, I'd like to end simply by saying, that's where I'm coming from in suggesting that we look at a conceptual model that can be shared with users first, and ensure that we build the more technical conceptual model on top of that. Having said that, I fully recognise that I am talking in too abstract a way to be really useful, and would value engaging in the actual job of building such models. With the aim of providing a good foundation for a technical architecture.

Like I've been saying elsewhere, the nature of the task of building common conceptual models means that it is almost impossible for one person alone to do it successfully. The coordinators of a model need to be listening, carefully, to how end users think; how they conceive of, how they construe, the tasks that we are trying to facilitate. If different users themselves think incompatibly, then I'd say it is the time for a campaign of education, thought of in a constructivist way, not in a transmissive way, of course. :smiley:

BH

Bob Haugen Wed 16 Aug 2017

@asimong thanks for very clear comments and questions.

if APIs use elements with different, incompatible semantics, there's going to be a quite possibly intractable translation challenge. So, maybe, APIs based on common vocabularies and semantics.

Yes and yes. That was the reason that the original OAE very loosely decided that a common vocabulary was critical.

I'm a firm believer that the concepts that are used to represent things in the ontology of relevance to end users should be comprehensible to those end users, at least in outline. That way, it will be much easier to build a set of services that actually make sense to users.

We agree, but it's a challenge to do.

the nature of the task of building common conceptual models means that it is almost impossible for one person alone to do it successfully. The coordinators of a model need to be listening, carefully, to how end users think; how they conceive of, how they construe, the tasks that we are trying to facilitate.

I don't know if you've followed how we have worked in Value Flows, and I think many of the OAE participants do things in very similar ways.

We always develop software for live groups that will collaborate in the process: talk to us about requirements, how they do things, and test the system as we develop it incrementally.

We work in spirals, between deep dives into particular communities, and spiraling back to Value Flows to summarize what we have learned and improve the vocabulary.

Over several years, some of us have developed concepts and components that we re-use in the next engagement. But we never develop software with no live users in the mix.

Now, sometimes this all works better than others, and we still have lots of communication problems and misfits between the software and the users, but it does get better. (I think.) Sometimes the set of components and concepts that we have developed over time does not work so well for the next group, for example. One of the goals of OAE was to develop a more flexible set of components that could be assembled by a community themselves.

BH

Bob Haugen Wed 16 Aug 2017

The Kamasi project from @ivan116 is an interesting hybrid. It has a stack, and an API, and the components in the stack communicate using the VF vocabulary. And organizationally, there's a small group of devs who work very closely with each other. If anybody is interested, either I or Lynn or Ivan could go into more detail about how it works and why it works that way.

BH

Bob Haugen Wed 16 Aug 2017

There are a couple of architectural memes that I have picked up over recent years. One is "small pieces loosely joined". Another is the general concept of linked data. Those two are not enough, by any means, and I wonder if there are other common memes — more commonly understood memes — that we might use as principles for the technical architecture underlying the ecosystem?

Those two principles were shared by the original OAE gang, along with some particular concepts about linked data vocabularies (REA, Semantic Web formats, etc). This new revival of OAE brings other vocabularies (ontologies) into the mix: for example, the Communecter pivot vocab, the PAIR vocab from Virtual Assembly, and the Data Food Consortium vocab. Thus the need to translate between vocabularies.
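
At its simplest, translating between vocabularies is a term-mapping table. Every term below is made up for illustration; as discussed later in this thread, real mappings need dialogue with the vocabularies' authors.

```python
# Toy mapping table between two vocabularies. All term names are
# hypothetical placeholders, not real VF or Communecter pivot terms.
VF_TO_PIVOT = {
    "vf:Intent": "pivot:announcement",
    "vf:Agent": "pivot:organization",
}

def translate_type(message, mapping):
    """Rewrite the message's @type; unmapped terms pass through unchanged."""
    out = dict(message)  # leave the original message untouched
    if "@type" in out:
        out["@type"] = mapping.get(out["@type"], out["@type"])
    return out
```

The hard part, of course, is not the table lookup but deciding which terms actually mean the same thing, which is a semantic question, not a technical one.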

SG

Simon Grant Wed 16 Aug 2017

Great to hear that we've been on the same page. Yes, I agree with your statement of need.

Can we (and if so, how can we) map out the different vocabularies that we want to translate between? I would find it useful if, in every case, we can get in close touch with at least one of the authors of the vocab. That enables deeper one-to-one dialogue and mutual human understanding; a good precursor to writing translations. I guess it's similar to having at least one native speaker of each language on hand when doing natural language translation.

BH

Bob Haugen Wed 16 Aug 2017

Can we (and if so, how can we) map out the different vocabularies that we want to translate between?

Yes, we are starting next week to map between the VF vocab and the Communecter pivot vocab. We are in contact with people from Virtual Assembly, and also the Data Food Consortium.

For natural language translation, the Fair Coop community translated all of their OCP software using Transifex. We're looking into how to translate Kamasi. I don't have personal experience in doing so; one of the Fair Coop devs led the process so far.

BH

Bob Haugen Wed 16 Aug 2017

P.S. maybe we need to add "translatable" to the Open App requirements somewhere? Like, Fair Coop was able to translate the OCP user interface from English to Spanish and allow each user to select their preferred language (of those two choices) because the base OCP software offered features for translating the UI. Otherwise, it would have been really hard.

GC

Greg Cassel Wed 16 Aug 2017

P.S. maybe we need to add "translatable" to the Open App requirements somewhere?

I think most concepts and statements are (literally) translatable. Do you mean something like "are compatible with one or more specific translation tools"?

SG

Simon Grant Wed 16 Aug 2017

Speaking for myself, @gregorycassel, I don't think "translatable" is related to translation tools. I guess some things, particularly technical things, can be translated with few issues. Personally, I've had much more experience of things that can be surprisingly tricky to translate. My experience is based mainly on English and Italian, with some awareness of French and a little Finnish. So I don't really understand you, Greg, when you say

"most concepts are (literally) translatable"

Can you give examples of the range of concepts that are translatable in this way, and perhaps any that you think may not be so easy?

GC

Greg Cassel Wed 16 Aug 2017

Depends on how we define "translate". I think that few if any concepts, outside of extremely simple physical concepts, can be "perfectly" translated! I see translation as an art which tries to transport complex concepts into a new format.

With that in mind, I think that there are few concepts in most languages which can't be translated into other languages... however, "directly" translating one word into a word in a different language often is impossible.

BTW here's my definition of "concept" (in MOT):

A concept is an idea which relates elements.

That definition is unusual and certainly not well-known. It was however the basis of my statement which you quoted. (I think that relationships can usually be translated.)

Does that help to clarify somewhat?

BH

Bob Haugen Wed 16 Aug 2017

What I mean by translatable is that everything in the user interface can be translated into different languages, and any given user can select their language of choice. See attached screenshots.

BH

Bob Haugen Wed 16 Aug 2017

the other screenshot in Spanish
(apparently Joining Process has not been translated)

GC

Greg Cassel Wed 16 Aug 2017

What I mean by translatable is that everything in the user interface can be translated into different languages

Okay, but "can be translated" by what? I guess you'd like to identify some specific translation tool or standard which data (and user interfaces) should be compatible with?

(For example, we could specify that data collected by OAE tools should be effectively translatable by Google Translate. Although I'd probably recommend some open source translation standard instead.)

I'm not sure how else "translatable" could be evaluated. I guess it'd be possible to create specifications for data entry formats which would make entered data easily translatable into most if not all languages -- but I also guess that that'd be a huge R&D project.

BH

Bob Haugen Wed 16 Aug 2017

The translation from English to Spanish was done by the Fair Coop community, which includes several excellent translators. They all collaborated to do the work using https://www.transifex.com/ which is not open source as far as I know, but is free for open source projects.

OCP uses the Django framework, which offers features for translation:
https://docs.djangoproject.com/en/1.11/topics/i18n/

Lynn and I did the technical translation setup in Django, and one of the Fair Coop devs did all the setup in Transifex.

I hope that makes it less mysterious.

The user-entered data is not yet translated, but one of the Fair Coop devs wants to set up the framework for that. Fair Coop is a multi-lingual organization, so more languages will follow. They are also setting up facilities for refugees, which means, even more languages.

BH

Bob Haugen Wed 16 Aug 2017

So what I mean by "translatable" is having the kind of setup that Lynn and I did in Django, and possibly also the additional features for translating user-entered content.

That is a lot to ask for small projects, and I hesitate to require it. But increasingly, technical frameworks are providing facilities for translation, so it might be doable in many cases.
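
The mechanism being described (Django-style i18n) boils down to message catalogs: mark UI strings, look them up per user language, and fall back to the source language when no translation exists. A framework-free sketch, with invented catalog contents:

```python
# Stripped-down sketch of catalog-based UI translation. The Spanish
# entries are illustrative, not the actual OCP catalogs.
CATALOGS = {
    "es": {"Work": "Trabajo", "Processes": "Procesos"},
}

def ugettext(message, language):
    """Return the translation, or the original string as a fallback."""
    return CATALOGS.get(language, {}).get(message, message)

print(ugettext("Work", "es"))             # -> Trabajo
print(ugettext("Joining Process", "es"))  # untranslated: falls back to English
```

The fallback behavior is why an untranslated string like "Joining Process" shows up in English on an otherwise Spanish screen, as in the screenshots above.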

SG

Simon Grant Thu 17 Aug 2017

Thank you, @gregorycassel I'm a lot clearer about your position now.

My point, and I guess we largely agree, may be that it can be very inelegant, and may feel quite unnatural, to translate some concepts directly. The art may need further study and elaboration?

@bobhaugen -- does your idea of translatable then include the idea that terms are chosen so that they are not peculiar to one language? I'm guessing that maybe that's not your prime concern...

BH

Bob Haugen Thu 17 Aug 2017

does your idea of translatable then include the idea that terms are chosen so that they are not peculiar to one language?

Has not, so far, but that's an interesting and maybe good idea. Do you think there might be a tradeoff between communicating clearly to the people who understand one language, and maybe adopting some term that might be more understandable as-is to people who speak other languages?

SG

Simon Grant Thu 17 Aug 2017

I can imagine two approaches.
1. Case by case, check with native speaker that terms translate well enough.
2. Go for an explicit conceptual model that doesn't rely on any particular natural language.

Caveat: I have no practical experience of this. I only imagine these approaches... :smiley:

BH

Bob Haugen Thu 17 Aug 2017

Go for an explicit conceptual model that doesn't rely on any particular natural language.

Somebody proposed for Value Flows that all the concepts be numbered instead of named, and then named by user communities. I think one of the *pedia projects does that - but it's not dbpedia, I just checked them. They seem to have English language names with lots of alternate labels for different human languages. E.g. http://mappings.dbpedia.org/server/ontology/classes/Agent

Problem is, if every concept is identified only by a number, it gets really hard for anybody to work with. I think those problems could be helped with a good user interface, but then you need a user interface other than plain text, which imposes another technical requirement and more work for a small dev team.

So, if I understand correctly, we will do it like dbpedia does.
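
The dbpedia-style compromise can be sketched as a stable identifier plus a preferred label and per-language alternate labels (skos-like). The identifier scheme and the labels below are invented for illustration.

```python
# Hypothetical concept registry: stable ids, multilingual labels.
CONCEPTS = {
    "concept:0042": {
        "prefLabel": {"en": "Agent"},
        "altLabel": {"es": "Agente", "fr": "Agent"},
    },
}

def label(concept_id, language="en"):
    """Fall back to the preferred English label for unlabeled languages."""
    concept = CONCEPTS[concept_id]
    return concept["altLabel"].get(language, concept["prefLabel"]["en"])
```

Software and data reference the opaque id, so nothing breaks when a community renames a concept in its own language; humans only ever see a label.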

GC

Greg Cassel Thu 17 Aug 2017

Problem is, if every concept is identified only by a number, it gets really hard for anybody to work with.

Concepts are easier to remember by using words instead of numbers. I think that images/icons are even quicker and easier to remember than words. (Which is IMO why lots of national and international signs are based on visual iconography.)

I'm not suggesting we return to hieroglyphics instead of an alphabet (lol). However, I think it'd be easy to argue that a specific set of fundamental terms could be primarily identified by icons instead of words or numbers.

That would probably require a fairly small set of fundamental terms, using icons which are each effectively clear, distinctive and (at least somewhat) meaningful in their appearance.

Unfortunately I've been out of art school for years, so I don't have time to focus on that train of thought right now. One of my main ultimate goals though is to help develop more expressive and flexible visual languages, using the best lessons of historic iconography and logography.

BTW everyone, this is a technical architecture comment but it's quite specific and tangential. (Not trying to disrupt the thread.)

BH

Bob Haugen Thu 17 Aug 2017

However, see also visual language experiments.

As that document warns, that is not the visual language (in terms of icons etc), just how the concepts fit together. My goal there, eventually, is to have a graphical editor that will allow different communities to compose their own systems from building blocks. The mythical software legos, which people have talked about for many years, but have seldom accomplished. I think it is doable. But it might take more people, and longer, than just me...

D

Draft Fri 18 Aug 2017

Is there any shared document where we can follow the progress of the work here? :D

BH

Bob Haugen Fri 18 Aug 2017

Not yet, as far as I know. A lot of discussions, maybe some agreement, maybe not. We might learn a lot more next week when we start on some actual collaborative work.

BH

Bob Haugen Fri 18 Aug 2017

Replying to myself: the closest things I know of were the early docs published by Enspiral. A lot has happened since then.