Trust Drives Opportunity

Wednesday, October 17, 2012 @ 08:10 PM gHale


By Nicholas Sheble
“There is significant, new economic value in our general theory of trust based on the combination of behavioral trust and computational trust,” said Virgil Gligor.

Gligor shared his thoughts during his Wednesday keynote address, titled “On the Foundations of Trust in Networks of Humans and Computers,” at the 2012 ACM Conference on Computer & Communications Security in Raleigh, NC. ACM is the Association for Computing Machinery.


“More trusting countries have higher gross domestic products (GDPs),” he said before delving into his ideas relating economics, sociology, psychology, computer science, and the networked world of online transactions and intelligence in which we live.

Gligor is a professor of Electrical and Computer Engineering at Carnegie Mellon University. He received the 2006 National Information Systems Security Award jointly given by NIST and NSA in the U.S.

He said a general theory of trust, which focuses on the establishment of new trust relations where none was possible before, would help create new economic opportunities. New trust relations would increase the pool of services available to users, remove cooperation barriers, and enable the “network effect” where it really matters, at the application level.

Gligor’s new theory of trust would not generate false metaphors and analogies with the physical world.

“The email trust model is an example of false expectations: The widespread user expectation that electronic mail would mirror the trust model of physical mail (e.g., authenticity, confidentiality, non-repudiation of receipt, delivery in bounded time) has misled many unsuspecting users into accepting spam, misleading ads, and malware.”

“In contrast, the trust model of eBay follows a well-established, traditional human trust example: It establishes trust relations based on reputation and, to counter inevitable protocol non-compliance and trust failures, it uses insurance-based recovery mechanisms.”
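The reputation-plus-insurance pattern Gligor describes can be sketched in a few lines of Python. This is an illustrative toy, not eBay's actual mechanism; the `Marketplace` class, its premium rate, and its scoring rule are all assumptions made for the example:

```python
# Illustrative sketch (not eBay's actual mechanism): reputation-based
# trust combined with an insurance-backed recovery mechanism.
class Marketplace:
    def __init__(self, insurance_rate=0.05):
        self.reputation = {}        # seller -> (successful, total) transactions
        self.insurance_pool = 0.0   # funded by a premium on every transaction
        self.insurance_rate = insurance_rate

    def score(self, seller):
        """Reputation as the fraction of successful transactions."""
        successes, total = self.reputation.get(seller, (0, 0))
        return successes / total if total else 0.5  # unknown sellers start neutral

    def transact(self, amount_paid, seller, delivered):
        """Record a transaction; on failure, refund the buyer from the pool."""
        # Every transaction pays a premium into the insurance pool.
        self.insurance_pool += amount_paid * self.insurance_rate
        successes, total = self.reputation.get(seller, (0, 0))
        self.reputation[seller] = (successes + int(delivered), total + 1)
        if not delivered:
            # Trust failure: insurance-based recovery, capped by the pool.
            refund = min(amount_paid, self.insurance_pool)
            self.insurance_pool -= refund
            return refund
        return 0.0
```

Reputation steers trustors toward trustworthy trustees before the fact, while the insurance pool limits a trustor's loss after a trust failure; the two mechanisms together address both sides of the problem Gligor identifies.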

Gligor argued that security research should enable and promote trust-enhancement infrastructures in networks of humans and computers, such as trust networks that exploit established social relations.

A general theory of trust in networks of humans and computers must rest on both a theory of behavioral trust and a theory of computational trust. This argument follows from the increased participation of people in online social networking, crowdsourcing, human computation, and socio-economic protocols.

Gligor explained the link between computational trust and behavioral trust this way: “Consider the receiver to be the ‘trustor’ or ‘investor’ and the sender to be the ‘trustee.’ The trustor incurs a cost by clicking on a link to the trustee. This cost, for example, could be the cost of executing the action or opportunity cost (of not clicking on a different link). Yet the trustor willingly transfers value (pays) to the trustee. In addition, the market may amplify the trustor’s transfer, as it may help the trustee monetize the trustor’s click. The trustee can choose to transfer value back to the trustor, as it expends resources to respond correctly to the trustor’s request, instead of cheating, i.e., instead of providing an arbitrary response, if not a malicious one, a partial response, or no response at all. However, if the trustor anticipates that the trustee will cheat, the trustor would avoid contacting the trustee (i.e., by not clicking on the link for a website), and no value would be exchanged. In summary, the act of trusting has the following three possible value outcomes:

1. If the trustor trusts the trustee and the trustee is trustworthy, then the trustor and trustee are better off than before executing the protocol, i.e., cooperation pays off.
2. If the trustor trusts the trustee and the trustee is untrustworthy, the trustee is better off and the trustor is worse off than before, i.e., the trustee has a strong incentive to cheat in the absence of a mechanism that protects the trustor.
3. If the trustor suspects that the trustee will cheat and hence does not initiate the protocol, no value exchange happens.”
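The three outcomes above have the structure of the behavioral-economics “trust game,” and can be sketched as a small payoff function. The function name and the specific cost, transfer, and reciprocation values below are illustrative choices, not figures from the talk:

```python
def trust_exchange(trustor_trusts, trustee_trustworthy,
                   cost=1.0, transfer=3.0, reciprocation=2.0):
    """Toy payoff model of the three trust outcomes.

    The trustor pays `cost` to initiate (e.g., clicking a link); the
    market amplifies that act into `transfer` for the trustee, who may
    return `reciprocation` worth of value by responding correctly.
    Returns (trustor_payoff, trustee_payoff); all values illustrative.
    """
    if not trustor_trusts:
        # Outcome 3: the protocol never starts; no value is exchanged.
        return 0.0, 0.0
    if trustee_trustworthy:
        # Outcome 1: cooperation pays off; both parties end up better off.
        return reciprocation - cost, transfer - reciprocation
    # Outcome 2: the trustee cheats, keeping the amplified transfer,
    # while the trustor is out the cost of initiating.
    return -cost, transfer

print(trust_exchange(True, True))    # cooperation: both payoffs positive
print(trust_exchange(True, False))   # cheating: trustee gains, trustor loses
print(trust_exchange(False, True))   # no trust: nothing is exchanged
```

With these numbers, the trustee's payoff from cheating (3.0) exceeds its payoff from cooperating (1.0), which is exactly the “strong incentive to cheat” in outcome 2, and why protective mechanisms such as reputation and insurance are needed.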

Computational techniques (e.g., cryptography, verification, fault tolerance) give us trust among computational entities (e.g., computers and networks), which leaves the non-computational entities. Thus, when we talk about trustors/receivers and trustees/senders, we are talking about humans: the person (sender) who wrote the code to be run remotely or who wrote the Wikipedia page, and the person (receiver) who requested the message contents, e.g., by clicking on the webpage.

“This means we need to look at theories that explain trust relations among humans,” Gligor said.
Nicholas Sheble (nsheble@isssource.com) is an engineering writer and technical editor in Raleigh, NC.
