To Those Who Face A Rejection … “I have read with dismay your presentation ‘Draft Ethernet Overview’”

You may be an academic who has failed to receive funding for a major grant proposal, or a researcher whose paper has been trashed, or a budding innovator who has been turned down because someone does not see your market potential. If so, read on …

There are so many people around who fail to see potential, and it is often easier to say "No!" than "Yes, but here's some advice". So, for all the people who have received paper rejections or have had their innovations turned down, here is the perfect example of why you must strive on:

The Internet was built on Ethernet … okay, IP and TCP helped a good deal, but it was Ethernet that really made it all work. Without that last connection, we would still be running modems over the telephone network. Ethernet has been truly transformative in our world.

And yet the memo it received shows exactly how not to write feedback. The first line is crushing:

I have read with dismay your presentation “Draft Ethernet Overview”

If there is one word not to use in the first line of your feedback, it is "dismay". The reader will already be crushed by this point, and will probably not even get to the next sentence. I must admit, I have done this myself too often.

The main problem with the Xerox memo is that the person writing it was totally out of their depth in analysing the innovation. From reading it, it looks as if it was written by someone focused on formal engineering methods and the associated maths. For Ethernet, there was little in the way of formality: it just had to get some data from one machine to another. And the problem is that Xerox probably had an engineering methodology for assessing innovation, and anything that fell outside it was likely to fail. We sometimes see this in research papers now, where amazing contributions are knocked back for a lack of citations, or because the main contribution is not spelt out clearly enough.

Quality is better than quantity

I love the old "classic" papers which basically just outlined a new method and didn't give you endless lists of references. One of my favourite papers is by Fiat and Shamir [here]:

The chances of this getting through the review process now would probably be low, as it has very few references, and one of them is to the authors' own work:

But it is the defining paper for Zero-knowledge Proofs.
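
To give a feel for how lightweight the idea is, here is a minimal sketch of the Fiat-Shamir identification protocol in Python. The modulus and secret are tiny illustrative values of my own choosing, nowhere near a secure parameter set:

```python
import random

# Toy modulus n = p * q; the factorisation is the trapdoor. These primes
# are tiny, purely for illustration.
p, q = 1009, 1013
n = p * q

# Prover's secret s, and the public value v = s^2 mod n
s = 1234
v = pow(s, 2, n)

def fiat_shamir_round() -> bool:
    """One round of the Fiat-Shamir identification protocol."""
    # 1. Prover commits: pick a random r, send x = r^2 mod n
    r = random.randrange(1, n)
    x = pow(r, 2, n)
    # 2. Verifier challenges with a random bit e
    e = random.randrange(2)
    # 3. Prover responds with y = r * s^e mod n
    y = (r * pow(s, e, n)) % n
    # 4. Verifier accepts if y^2 == x * v^e (mod n)
    return pow(y, 2, n) == (x * pow(v, e, n)) % n

# A cheating prover passes each round with probability 1/2, so twenty
# rounds leave a soundness error of about one in a million.
print(all(fiat_shamir_round() for _ in range(20)))  # True
```

The verifier ends up convinced that the prover knows s, but learns nothing about s itself, which is the whole point of a zero-knowledge proof.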

Many people think that undergraduate degrees are unlikely to create ground-breaking ideas, but that is often not the case. Ralph Merkle, in 1974, pitched the idea of public key encryption to his professor in a coursework proposal, and defined a method of key exchange (now known as Merkle's Puzzles). It was rejected by his professors, and was only resurrected when Ralph heard of the work of Martin Hellman and Whitfield Diffie at Stanford. The feedback on the proposal reads:

Project 2 looks more reasonable maybe because your description of Project 1 is muddled terribly

Ralph then submitted his idea as a paper to the Communications of the ACM, but it was rejected, as it did not have a formal literature review or references to other work. Ralph reasoned that there couldn't have been references, as there was no other work like it around. The paper was finally accepted three years later, but this time it had references to other work.
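
The core trick of Merkle's Puzzles is simple enough to sketch: Alice publishes many weakly encrypted puzzles, Bob brute-forces just one of them, and an eavesdropper is left having to brute-force them all. Here is a toy Python version, where the cipher, the sizes and the message format are my own illustrative choices:

```python
import os
import random
import hashlib

def weak_encrypt(key: int, data: bytes) -> bytes:
    """Toy cipher: XOR with a keystream from a small key (it is its own inverse)."""
    stream = hashlib.sha256(key.to_bytes(4, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

N_PUZZLES = 1 << 12      # how many puzzles Alice publishes
KEYSPACE = 1 << 16       # each puzzle is breakable by brute force

# Alice builds the puzzles: each hides (id, session_key) under a weak key.
alice_keys = {}
puzzles = []
for pid in range(N_PUZZLES):
    session_key = os.urandom(8)
    alice_keys[pid] = session_key
    plaintext = b"PUZZLE" + pid.to_bytes(4, "big") + session_key
    puzzles.append(weak_encrypt(random.randrange(KEYSPACE), plaintext))
random.shuffle(puzzles)  # so the id inside gives an eavesdropper no shortcut

# Bob picks one puzzle at random and brute-forces its weak key.
chosen = random.choice(puzzles)
for k in range(KEYSPACE):
    pt = weak_encrypt(k, chosen)
    if pt.startswith(b"PUZZLE"):
        pid = int.from_bytes(pt[6:10], "big")
        bob_key = pt[10:18]
        break

# Bob announces pid in the clear; Alice looks up the same session key.
# An eavesdropper must break puzzles until finding that id: roughly
# N_PUZZLES/2 * KEYSPACE/2 work, versus Bob's KEYSPACE/2.
assert alice_keys[pid] == bob_key
```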

In a lesson for any modern researcher, in just two pages, Robert McEliece, in 1978, outlined the McEliece cryptosystem [paper]. Overall, it is an asymmetric encryption method (with a public key and a private key) and, at the time, it looked to be a serious contender. Unfortunately for the method, RSA became King of the Hill, and McEliece was pushed to the back of the queue by designers.

In an era of ever-inflating abstracts and reference lists, the paper covers just what it needs to, with only a few passing references to some classic papers:

It has basically drifted in the 38 years since. But, as the era of quantum computers dawns, it is being considered again, as it is immune to attacks using Shor's algorithm. On this page, I've avoided going into detail on the actual method, but have provided code for you to test it out.
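
To give a flavour of the structure, here is a toy sketch that follows the McEliece recipe (a scrambled generator matrix as the public key, and deliberate bit errors added at encryption) but swaps the binary Goppa codes of the real scheme for a tiny [7,4] Hamming code. It illustrates the shape of the system only; these parameters are nothing like secure:

```python
import numpy as np

# Systematic [7,4] Hamming code: generator G = [I | A], parity check H = [A^T | I]
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])      # 4x7 generator
H = np.hstack([A.T, np.eye(3, dtype=int)])    # 3x7 parity check

def gf2_inv(M):
    """Invert a binary matrix over GF(2) by Gaussian elimination (None if singular)."""
    n = len(M)
    aug = np.hstack([M % 2, np.eye(n, dtype=int)])
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r, col]), None)
        if pivot is None:
            return None
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] = (aug[r] + aug[col]) % 2
    return aug[:, n:]

rng = np.random.default_rng(1)

# Private key: a random invertible scrambler S and a random permutation P
while True:
    S = rng.integers(0, 2, (4, 4))
    S_inv = gf2_inv(S)
    if S_inv is not None:
        break
P = np.eye(7, dtype=int)[rng.permutation(7)]

# Public key: the disguised generator matrix G' = S G P
G_pub = S @ G @ P % 2

def encrypt(m):
    """Ciphertext c = m G' + e, with a single deliberate bit error (t = 1)."""
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1
    return (m @ G_pub + e) % 2

def decrypt(c):
    c = c @ P.T % 2                  # undo the permutation
    syn = H @ c % 2                  # Hamming syndrome of the shifted error
    if syn.any():
        # the error sits at the column of H that matches the syndrome
        pos = next(j for j in range(7) if np.array_equal(H[:, j], syn))
        c[pos] ^= 1
    return c[:4] @ S_inv % 2         # first 4 bits are m S; unscramble

m = np.array([1, 0, 1, 1])
assert np.array_equal(decrypt(encrypt(m)), m)
```

Only someone who knows S and P can peel the problem back to an easily decodable code; everyone else faces decoding a random-looking linear code, which is hard.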

Writing a paper

A core part of being an academic is publishing papers. It is the thing that we are often measured on. When we recruit, we often look at the quality rather than the quantity of someone's research output: one good paper containing a strong scientific contribution is often better than a whole lot of papers which add little to current work. Personally, as a reviewer, I most often reject papers for the following reasons, in order:

  1. Poor English and grammar.
  2. Lack of focus: no definition of the problem statement, or of how the paper addresses it.
  3. Little contribution to existing methods.
  4. Lack of definition of the key contribution.
  5. Lack of results.
  6. Lack of formality.
  7. Poor definition of figures and diagrams.
  8. Poor coverage of the existing literature.

Some reviewers can just quickly look at a paper and decide that it is a bad one. So can a machine learn to review a paper at a glance, and thus tell us which key factors reviewers look for? For this we turn to new work on a classifier based on the visual appearance of a paper, defined as the gestalt of a paper [here]:

In their work, they took a wide range of previously accepted and rejected papers, and created a classifier that could reject 50% of the bad papers at a cost of wrongly rejecting only 0.4% of the good ones. Such a system, if it worked, would considerably reduce the workload of reviewers.
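
That trade-off is simply a choice of operating point: slide the classifier's score threshold until the false-rejection rate on known-good papers is acceptable, then see what fraction of bad papers it catches. A small sketch of the mechanics, using synthetic scores of my own invention rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "bad paper" scores for held-out papers of known quality:
# good papers skew low, bad papers skew high.
good_scores = rng.beta(2, 5, 10_000)
bad_scores = rng.beta(5, 2, 10_000)

# Pick the threshold that wrongly rejects at most 0.4% of good papers ...
threshold = np.quantile(good_scores, 0.996)
# ... and measure how many bad papers that operating point catches.
caught = (bad_scores >= threshold).mean()
print(f"threshold={threshold:.3f}, bad papers auto-rejected: {caught:.1%}")
```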

Within the work, the authors outline previous approaches to classifying research papers:

  • Administration. This analyses the basic administrative processes around the submission of papers, such as violation of anonymity, poor formatting, and being clearly out of scope. The correlation here is that weak research teams are likely to have a poor grasp of the peer review process and make simple mistakes in their submission, while a strong research team is likely to have good processes for making sure that papers are properly reviewed and fit the requirements of the submission system. As an editor, I get to see some weak submissions which have little chance of ever being accepted. A one-minute glance at a paper can tell you if it has little chance of success, and poor papers will often be rejected at this stage for their poor compliance with the submission requirements.
  • Text-based methods. These involve automated ways of grading a paper, and could involve checks on grammar scores, spelling errors, usage of maths, usage of keywords, and so on. I have personally seen many reviews where the reviewer justifies their rejection on the basis of poor grammar and/or typos, so I think this type of method has a solid base for classifying papers; a toy sketch of such features follows this list. An editor who sees a whole series of typos flagged in the review comments will often think the worst of the paper.
  • Visually-based methods. These involve methods that analyse the look and feel of the paper.
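
As a flavour of the text-based family, here is a toy feature extractor. The particular features (spelling-error rate, maths density, keyword counts) are my own illustrative picks, not those of any system cited above:

```python
import re

KEYWORDS = {"contribution", "evaluation", "results", "method"}

def text_features(text: str, dictionary: set) -> dict:
    """Crude text-based grading features for a paper's plain text."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    misspelt = sum(w not in dictionary for w in words)
    return {
        # fraction of words not found in the dictionary
        "spell_error_rate": misspelt / max(len(words), 1),
        # rough proxy for maths usage: density of $ ... $ delimiters
        "maths_density": text.count("$") / max(len(text), 1),
        # how often the standard "paper" vocabulary appears
        "keyword_hits": sum(w in KEYWORDS for w in words),
    }
```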

The methodology for the new method used papers accepted for nine conferences hosted by the Computer Vision Foundation (CVF). Unfortunately, the authors did not get access to the rejected papers, so as a proxy they used the ones which did not appear in the main conferences but were accepted for the associated workshops.

For their method, they used the pdf2image program to convert each paper into a single image laid out as a 2x4 grid of its first eight pages, and then compared workshop paper layouts to conference ones [dataset]:
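
Something like the following would produce that grid; the DPI, thumbnail size and grid orientation are my own guesses rather than the authors' exact settings:

```python
from pdf2image import convert_from_path
from PIL import Image

def paper_gestalt(pdf_path: str, page_w: int = 180, page_h: int = 240) -> Image.Image:
    """Render the first eight pages of a PDF as a 2x4 grid of thumbnails."""
    pages = convert_from_path(pdf_path, dpi=40, first_page=1, last_page=8)
    grid = Image.new("RGB", (4 * page_w, 2 * page_h), "white")
    for i, page in enumerate(pages[:8]):
        thumb = page.resize((page_w, page_h))
        grid.paste(thumb, ((i % 4) * page_w, (i // 4) * page_h))
    return grid

paper_gestalt("paper.pdf").save("gestalt.png")
```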

After training a ResNet-18 [here] on papers from 2013 to 2017, they then predicted accept/reject decisions for 2018, and found that they were able to correctly reject 1,115 bad papers while missing only four good papers (out of 979 good papers). In the work, a bad paper looks like this:

and a good paper:
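
A minimal sketch of that kind of fine-tuning with torchvision's off-the-shelf ResNet-18; the folder layout, image size and hyper-parameters are my own illustrative assumptions, not the authors' training recipe:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Gestalt grid images sorted into gestalts/train/accept and gestalts/train/reject
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("gestalts/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and swap in a two-class head: accept / reject
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```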

Overall, the placement of diagrams was often key to the classification, especially putting an overall contribution diagram at the start of the paper. The usage of tables and plots considerably helps the success of a paper, too. In the following, we see the usage of an overview diagram on the first page:

The authors note that a paper may be difficult to read if there is no illustrative diagram in the first couple of pages.

Conclusions

Have faith in what you do, and learn, but do not let the people who say “No!” put you off.