As discussed on the WG call on 18-Dec, we would like to avoid criteria that exist mainly to promote the unique features of a single DID method.
I would like to propose that a criterion should be supported by at least 3 DID methods with production implementations before being admitted into the list.
Therefore, each criterion should clearly point to 3 existing DID methods that implement it before it is approved. This should counterbalance some of the "DID method bias" that each of us DID method designers inevitably has, and promote criteria that are broadly applicable across several DID methods.
Support for a criterion can be a boolean "complies" / "doesn't comply" for a given DID method, but in other cases it can also be a quantifiable value, for example "this DID method meets 60% of criterion X" or "this DID method has been benchmarked and tested at 100 txn/sec with 10 million users".
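For illustration only, here is a rough sketch in TypeScript of how boolean vs. quantified support and the 3-method threshold could be recorded. All names (`Support`, `Evaluation`, `isAdmissible`) are made up for this example and are not part of any existing registry or spec:

```typescript
// Hypothetical data model: per-method support for a criterion can be boolean,
// a percentage, or a benchmark result. Names here are illustrative only.

type Support =
  | { kind: "boolean"; complies: boolean }                   // "complies" / "doesn't comply"
  | { kind: "percentage"; percent: number }                  // e.g. meets 60% of the criterion
  | { kind: "benchmark"; txnPerSec: number; users: number }; // e.g. 100 txn/sec, 10M users

interface Evaluation {
  didMethod: string;     // e.g. "did:example"
  inProduction: boolean; // only production implementations count toward the threshold
  support: Support;
}

// A criterion would be admissible if at least 3 production DID methods support it.
function isAdmissible(evaluations: Evaluation[], minMethods = 3): boolean {
  const supporting = evaluations.filter(
    (e) =>
      e.inProduction &&
      (e.support.kind === "boolean"
        ? e.support.complies
        : e.support.kind === "percentage"
        ? e.support.percent > 0
        : true) // any benchmark result counts as (quantified) support
  );
  return new Set(supporting.map((e) => e.didMethod)).size >= minMethods;
}
```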
My sense is that all the criteria can be met by at least 3 methods. Obviously, there may be a handful that aren't. @ottomorac , do you have any specific criteria in mind that you suspect won't be fulfilled by at least 3?
> My sense is that all the criteria can be met by at least 3 methods.
I can foresee that if a criterion were support for specific cryptographic curves or suites, only a handful of DID methods might support that novel mechanism (and two others might not). Perhaps @ottomorac meant criteria like that?