Test-of-Time (ToT) for Research Papers: Some Papers Rocket, Some Papers Crash, But Most Never Go Anywhere

Apple [here] Spotify [here]

In research, the publishing of high-quality papers is often critical for the development of a research career:

“I am an academic. It’s publish or perish.” Daniel J. Bernstein.

But we should measure research work in terms of quality rather than quantity. One high-quality research paper is probably worth more than the millions of papers published in predatory journals. A great researcher should measure the quality of their work by the known impact and contribution of their research papers, and not by citation counts or journal impact factors. In fact, review papers often contribute little to the development of new methods, yet they are some of the most highly cited papers.

A research paper thus has a life. Authors might dream that their work is going to fundamentally change a given field, but it ends up barely being read and withers away. Overall, most papers just bob along with a few citations a year, and where you are lucky if you get more than 10 citations. An academic will often follow the impact of their papers on Google Scholar, which can give an idea of whether their work is rising or on the wane. If you are interested, here’s mine showing a nice exponential rise over the past few years:

Some papers might rocket with many initial citations, and where researchers cite them heavily, but then either the research area just dies off with a lack of interest, or problems are found with it. Isogenies within post-quantum methods are one example of this, and where a single crack on SIDH (Supersingular Isogeny Diffie-Hellman) stopped some of the advancements in the field [here]:

Up to that point, isogenies were the poster child and the great hope for competing with lattice methods. While they were still slow, researchers were gearing up their research to address many of their performance weaknesses. They were much loved, as they used elliptic curves, but one paper stalled the isogeny steam train. I do believe they will return strong, but it will take a while to recover from such a serious crack. Cryptography is often about reputation, and a single crack can bring the whole method down.

Other papers, though, can be slow burners. The core papers in ECC (Elliptic Curve Cryptography), for example, did not take off for a few years after the work was published. When Neal Koblitz published his paper on “Elliptic curve cryptosystems” in 1987 [2], it was hardly cited, and few people picked up on its potential to replace RSA signatures. In 1997 (10 years after the publication of the paper), it had still only achieved 41 citations. But things really took off around 2005, and especially when Satoshi Nakamoto adopted ECC for Bitcoin around 2009. It now sits at nearly 400 citations per year, and where ECDSA and EdDSA have made a significant impact in replacing our cumbersome RSA methods:

Test-of-Time (ToT) Award

Now, Chris Peikert, Brent Waters, and Vinod Vaikuntanathan (Vai-kun-tan-athan) have been awarded the International Association for Cryptologic Research (IACR) Test-of-Time (ToT) Award for their paper entitled “A Framework for Efficient and Composable Oblivious Transfer”, which was presented at the Crypto 2008 conference [here][1]:

Overall, the Test-of-Time Award is given to papers published 15 years ago at one of the three IACR general conferences (Eurocrypt, Crypto and Asiacrypt).

The developed framework integrates “universal composability”, which provides strong security properties. Basically, a protocol (P1) is secure if it emulates an ideal protocol (P2), and where it is not possible to tell the two apart; this guarantee holds even when the protocol is composed with other protocols. The paper also introduced a simple “dual-mode cryptosystem” to achieve this.

The work has been fundamental in creating Oblivious Transfer (OT) protocols, which are used in Multi-Party Computation (MPC). A great advancement of the paper is its usage of Learning With Errors (LWE), which is now used within lattice cryptography methods. The paper thus helped lay a foundation for lattice-based cryptography.
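
To give a feel for what oblivious transfer provides, here is a minimal Go sketch of the 1-out-of-2 OT functionality modelled as a trusted third party: the sender offers two messages, the receiver picks one, the receiver learns only that message, and the sender learns nothing about the choice. This is purely an illustration of the interface, not a secure protocol, and the otIdeal helper is just an assumed name for the sketch:

package main

import "fmt"

// otIdeal models the 1-out-of-2 oblivious transfer functionality as a
// trusted third party: the sender supplies two messages (m0 and m1), and the
// receiver supplies a choice bit. The receiver learns only the chosen
// message, and the sender learns nothing about the choice. This is only an
// illustration of the interface, and not a secure implementation.
func otIdeal(m0, m1 []byte, choice byte) []byte {
	if choice == 0 {
		return m0
	}
	return m1
}

func main() {
	m0 := []byte("secret message 0")
	m1 := []byte("secret message 1")
	// In a real OT protocol, the choice bit stays hidden from the sender.
	fmt.Printf("Receiver obtains: %s\n", otIdeal(m0, m1, 1))
}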

As with the ECC paper, this one was a slow burner [here], with only 11 citations in 2008, but it has since risen to more than 10 times that number:

MPC

So, let’s see if we can build a model where we can securely distribute values and then get our nodes to compute the result of a calculation. None of the nodes should be able to compute the result without the help of the others, and where Trent is trusted to distribute the inputs, watch the broadcasts, and then gather the results. For this, we can use Shamir secret shares, where a value can be split into t-from-n shares and where we need t shares to rebuild the value.

So, we could distribute a 2-from-3 share to Bob, Alice and Eve, and then Bob and Alice, or Alice and Eve, could rebuild the value again. So let’s say we have two values, x and y, and we want to compute x×y. We initially start with n parties, and where we define a threshold of t (the minimum number of shares required to rebuild any value). Initially, Trent (the trusted dealer) splits the input values of x and y into shares:

Shares_x = {x_1, x_2, …, x_n}

Shares_y = {y_1, y_2, …, y_n}

Next, Trent sends one share of each value to each of the nodes, such as x_i and y_i to node i. Each node then gathers at least t shares (including its own) from the other nodes. Each node is then able to rebuild the values of x and y, and then compute x×y. Trent then receives all the results back and makes a judgement on the consensus; if we have 12 nodes, and at least eight of the nodes are fair, the result will be the correct one. A sketch of this final consensus step is given after the sample runs below.

Here is the code [here]:

package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strconv"

	"github.com/codahale/sss"
)

// mult recombines each subset of shares and multiplies the recovered values.
func mult(subset1 map[byte][]byte, subset2 map[byte][]byte) int {
	a, _ := strconv.Atoi(string(sss.Combine(subset1)))
	b, _ := strconv.Atoi(string(sss.Combine(subset2)))
	return a * b
}

// add recombines each subset of shares and adds the recovered values.
func add(subset1 map[byte][]byte, subset2 map[byte][]byte) int {
	a, _ := strconv.Atoi(string(sss.Combine(subset1)))
	b, _ := strconv.Atoi(string(sss.Combine(subset2)))
	return a + b
}

// sub recombines each subset of shares and subtracts the recovered values.
func sub(subset1 map[byte][]byte, subset2 map[byte][]byte) int {
	a, _ := strconv.Atoi(string(sss.Combine(subset1)))
	b, _ := strconv.Atoi(string(sss.Combine(subset2)))
	return a - b
}

// getShares picks k of the n shares (enough to satisfy the threshold) and
// prints each share as it is selected.
func getShares(shares map[byte][]byte, k byte) map[byte][]byte {
	subset := make(map[byte][]byte, k)
	for x, y := range shares {
		fmt.Printf("Share:\t%d\t%s ", x, hex.EncodeToString(y))
		subset[x] = y
		if len(subset) == int(k) {
			break
		}
	}
	fmt.Printf("\n")
	return subset
}

func main() {
	a := "10"
	b := "11"
	n1 := 5
	k1 := 3

	// Optional arguments: a, b, k (threshold) and n (number of shares).
	argCount := len(os.Args[1:])
	if argCount > 0 {
		a = os.Args[1]
	}
	if argCount > 1 {
		b = os.Args[2]
	}
	if argCount > 2 {
		k1, _ = strconv.Atoi(os.Args[3])
	}
	if argCount > 3 {
		n1, _ = strconv.Atoi(os.Args[4])
	}
	n := byte(n1)
	k := byte(k1)

	fmt.Printf("a:\t%s\tb: %s\n\n", a, b)
	fmt.Printf("Policy. Any %d from %d\n\n", k1, n1)
	if k1 > n1 {
		fmt.Printf("Cannot do this, as k is greater than n")
		os.Exit(0)
	}

	// Split each input value into n Shamir shares with a threshold of k.
	shares1, _ := sss.Split(n, k, []byte(a))
	shares2, _ := sss.Split(n, k, []byte(b))

	// Gather k shares of each value, then compute with the recovered values.
	aSubset := getShares(shares1, k)
	bSubset := getShares(shares2, k)

	res1 := mult(aSubset, bSubset)
	res2 := add(aSubset, bSubset)
	res3 := sub(aSubset, bSubset)
	fmt.Printf("\na*b= %d\n", res1)
	fmt.Printf("a+b= %d\n", res2)
	fmt.Printf("a-b= %d\n", res3)
}

A sample run is [here]:

a:  10  b: 11
Policy. Any 3 from 5
Share: 5 fe87 Share: 1 8bd2 Share: 2 16c7
Share: 2 e47a Share: 3 db58 Share: 4 1a9b
a*b= 110
a+b= 21
a-b= -1

and:

a:	9999	b: 9998
Policy. Any 3 from 6
Share: 1 968ada76 Share: 2 44fc9b0c Share: 3 eb4f7843
Share: 4 6d4bf67a Share: 5 1cbaf095 Share: 6 3ef251e3
a*b= 99970002
a+b= 19997
a-b= 1
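
The code above reconstructs the values and performs the arithmetic locally. The final step described earlier, where Trent gathers the results from the nodes and takes a majority view, could be sketched as follows. The majority helper is purely illustrative (it is not part of the codahale/sss library or the code above):

package main

import "fmt"

// majority returns the value reported by the largest number of nodes, along
// with how many nodes reported it. This is a hypothetical helper used only
// to illustrate Trent's final consensus step.
func majority(results []int) (int, int) {
	counts := make(map[int]int)
	best, bestCount := 0, 0
	for _, r := range results {
		counts[r]++
		if counts[r] > bestCount {
			best, bestCount = r, counts[r]
		}
	}
	return best, bestCount
}

func main() {
	// Suppose 12 nodes each report their computed value of x*y (10*11 = 110),
	// and a few faulty nodes report something else.
	results := []int{110, 110, 110, 110, 110, 110, 110, 110, 42, 42, 0, 110}
	value, count := majority(results)
	fmt.Printf("Consensus value: %d (reported by %d of %d nodes)\n", value, count, len(results))
}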

Conclusions

The paper by Peikert, Vaikuntanathan, and Waters laid the groundwork for many areas, including MPC and lattice-based cryptography. In the 15 years since it was published, it has been cited over 820 times, and it is a highly recommended read. And so, don’t measure the initial impact of a paper by the number of citations it receives after a year or two, as its time may yet be to come.

For ECRs (Early Career Researchers) … have faith in your work … if you keep your focus, your work will get noticed. If not, perhaps you have the wrong focus or are in the wrong field.

References

[1] Peikert, C., Vaikuntanathan, V., & Waters, B. (2008). A framework for efficient and composable oblivious transfer. In Advances in Cryptology – CRYPTO 2008. Springer, Berlin, Heidelberg.

[2] Koblitz, N. (1987). Elliptic curve cryptosystems. Mathematics of Computation, 48(177), 203–209.