How to Write a Review?
J.P. Martin-Flatin and E. Lupu
Version 1: April 2004
Version 2: January 2012
This page is primarily meant for
junior academics and Ph.D. students who are not yet used to writing
reviews for research papers.
- By agreeing to review for a journal, conference or
workshop, you agree to serve your research community.
- Doing reviews takes time. On average, an experienced
reviewer spends 2-3 hours reviewing a 12-page conference paper, and 6-8
hours reviewing a 15-page journal paper. If you are inexperienced, you
should schedule more time. Before agreeing to review, be prepared
to dedicate enough time to the task.
- As a reviewer, you fulfill two roles. First, by providing marks
that assess the quality of the paper, you help the editorial board or
program committee select the best papers. Second, by providing comments
to the authors, you help them improve the quality of their paper.
- Being asked to review does not give you carte blanche to
bash people anonymously. Be moderate in your criticisms.
- Most journals and conferences ask reviewers to give marks
between 1 (poor) and 5 (excellent). The idea is that a word such as
"good" means different things to different people, whereas a mark is a
bit more universal (a Gaussian curve centered on 3). In this section, we
assume that the top mark is 5 and the lowest is 1.
- In a review, you are asked to mark a paper according to
different criteria: technical quality, innovation, readability,
structure, scope, etc. If you find that a paper is poor, do not
stress your point by setting all your marks to 1. In fact, doing so is
counterproductive: program chairs and journal editors usually interpret
it as meaning that your review is biased.
- A review that consists only of marks with no explanations
is worth nothing. Marks alone do not help select papers: they may say
as much about the reviewer as about the paper itself.
Marks alone also do not help the authors improve their paper. It is
therefore important that your marks be justified by appropriate
comments. If you send a review with fewer than five lines of comments, it
will most probably be ignored and someone else will have to redo it.
- Giving bad marks to a good paper does not increase the
chances that your own paper will be accepted, but it does increase your
chances of not being asked again to do reviews in your research
community. Unreasonably harsh reviews are very easy to detect. For
annual conferences and workshops, the program chairs (who read all the
reviews) usually provide feedback to the next year's program chairs on
the quality of the reviewers.
- When you give marks, be careful to assess what each mark is
meant for. If you find that a paper is technically weak but fits in the
scope of that conference or journal, you should give it 5 for "scope"
or "relevance" and 1 for "technical". In this case, giving 1 for
"scope" would not emphasize technical weakness: it would expose your
inability to assess multiple criteria.
- Setting all marks to 3 and summarizing the abstract in the
"comments" section does not hide the fact that you did not properly review the paper. If
you do not have the time to review it, inform the program chairs or
journal editors, who will reassign it to someone else.
Paper vs. Technical Report
- A research paper is not a technical report. A technical report
describes what the authors did. In addition to describing what they
did, a research paper should also justify the need for their work (motivation),
compare their approach with others (related work),
draw lessons of general interest (conclusion),
and provide the rationale for each of their technical choices. Simply
stating "we have chosen to use technology xyz" is not sufficient in a
research paper. Remember the catchphrase: "This is what we did and why
we did it this way."
- In a research paper, and particularly in a journal article,
the authors should identify the limitations on the applicability and
use of the work they have undertaken. A good place for it (but not the
only one) is the conclusion.
What to Check in a Paper
- Check for technical accuracy. This does not mean that you
need to agree with the approach taken but that the approach should be
technically correct. The paper should present alternatives to the
technical choices that have been made when these choices are debatable.
- Check that the introduction is reasonably short. Long
introductions should generally be split into two sections.
- Check the problem statement and the motivation of the work
undertaken by the authors. You need to have clear answers to the
following questions: What problem is solved in this paper? What are the
alternative solutions to this problem? Why does it make sense to study
the solution considered in this paper?
- Check that the paper gives a clear overview of the work and
the approach taken before delving into technical details.
- Check the figures. Are they easy to read and understand?
Are they written in a foreign language? Do they use a standard notation?
- Many people are unaware that, by definition, conclusion =
summary + future work. Check that the summary includes lessons of
general interest. Check that the authors give directions for future
work (they often forget). Check that the conclusion is short.
- Check references and the study of related work, especially
for journal papers. Does the paper cite the seminal articles in the
area or does it miss important related work? (Note: "important" does
not necessarily mean "your own".) Suggest new references if you deem
that important ones are missing. Very often, authors reference general
papers on the technology that they are using (e.g., data mining), but
forget to mention papers that use this technology in the same context
(e.g., data mining in network management).
- Check that all references are properly formatted. Many
authors are sloppy and give incomplete references or make editorial errors in their
references. Bear in mind that a growing number of papers are
automatically parsed and indexed, and the impact of a given paper is
assessed by the number of times other papers cite it. Fuzzy or
sloppy references lead to incorrect citation counts, break citation
analysis, and can hamper research evaluation. They can also prevent
authors from retrieving relevant papers.
- Check against plagiarism. The best way to do this is to
read the literature in your field and to refuse to review papers
outside your areas of expertise. If you are suspicious, Google and
online digital libraries make it easy to check whether some material
has been "borrowed" from others. If you detect a clear case of
plagiarism, report it immediately to the program chairs or editors.
Many conferences and journals have explicitly stated policies against
plagiarism, and the paper may be rejected on the spot without reviews,
thereby lightening the workload of other reviewers. If you suspect
plagiarism but are not sure, mention it in the "confidential comments"
section of your review, which is not sent to the authors.
- If you have the time, check against self-plagiarism: too many
people publish the same idea several times. If you detect that the same
paper (or almost the same paper) has been published or submitted to
multiple conferences, workshops or journals, let the program chairs or
journal editors know immediately. If you suspect self-plagiarism but
are not sure, mention it in the "confidential comments" section of your
review.
- Check that the paper does not include blatant commercial advertising:
technical literature is not a suitable medium for advertising.