Frequently Asked Questions

To start with, read our brief introduction to electronic voting, or our rationale for using the MIT license if you are wondering about usage.

For technical questions about our software, we refer the reader to our whitepapers on the downloads page. Here we answer general questions, questions about the techniques we use, and questions about how our software compares to that of others. Contact us if you have further questions or find any errors or outdated information below!

Cryptography

  1. What is a cryptographic protocol?
  2. What is a definition of security?
  3. What is a proof of security?
  4. Why is a proof of security a requirement to claim that a protocol is secure?
  5. Does it really matter in practice?
  6. What is an interactive, or non-interactive, proof?
  7. What is universal verifiability?
  8. What is special about the application of cryptography in electronic voting systems?

Acquisition of Electronic Voting Systems

  1. What is the best electronic voting system in the world?
  2. How do I know if my current vendor is trustworthy?
  3. Why does it matter who implements a system?

Using Verificatum Software

  1. Why should we trust you?
  2. We need somebody to install and run our system, can you provide that service?
  3. Do I need additional software to set up an electronic election?
  4. How do I know that the software is as good as you claim?
  5. Why are your webpages black and white?

Technical Aspects of Verificatum Software

  1. What is the difference between the old version of VMN licensed under LGPL and the current version licensed under MIT License?
  2. Why do you say that VMN is the only universally verifiable mix-net?
  3. Which primitives and cryptographic protocols are implemented in VMN?
  4. How fast is your mix-net compared to other mix-nets?
  5. Why doesn't the running time of decryption (and verification of decryption) increase with the number of mix-servers?
  6. Given the simple design of randomized partial checking, why do you use proofs of shuffles in VMN?
  7. Why don't you use the XYZ proof of a shuffle?
  8. How fast is your JavaScript library compared to other libraries?
  9. Why don't you use the XYZ technology to do ABC?


Cryptography

What is a cryptographic protocol?

A cryptographic protocol, or simply a protocol, is the technical term used in computer science to refer to a set of programs that interact with each other during execution.

A protocol can be described at different levels of abstraction. In research papers, abstract notation is used and many details are dropped in order to express the key novel ideas succinctly. When implementing a protocol, the omitted details must be filled in with great care. This is particularly important in the description of any verification that must take place.

What is a definition of security?

A definition of security captures a set of properties that must be satisfied by a protocol to be considered secure. Determining what these properties are is non-trivial for new protocols, and even for old and seemingly simple notions such as cryptosystems there are several variants.

It is important to understand that in this context, the word definition is reserved for mathematical definitions, e.g., a statement such as "an electronic voting system must be private", even if made more precise, is not a definition.

What is a proof of security?

Given a definition of security and a protocol description, a proof of security is a proof that the protocol satisfies the requirements of the definition. Again it is important to understand that the term proof is reserved for mathematical proof. The "security analyses" found in some papers are not proofs.

Such a proof is often only possible under a computational assumption, i.e., the mathematical statement that is actually proved is that either: (a) the protocol satisfies the definition, or (b) an algorithm can be constructed that solves a computational problem which, by overwhelming consensus, is considered infeasible to solve. A well-known example of such an assumption is the hardness of factoring integers, but in the context of electronic voting the discrete logarithm assumption is more common.
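
As a small illustration of what a computational assumption means in practice, here is a Python sketch (ours, purely for illustration; the parameters are arbitrary and not part of any protocol) of the asymmetry behind the discrete logarithm assumption: computing g^x mod p is fast even for very large exponents, while recovering x from g^x mod p is believed to be infeasible in well-chosen groups.

    # Illustration only: the asymmetry behind the discrete logarithm assumption.
    # Exponentiation is fast even for huge exponents, but recovering the exponent
    # (the discrete logarithm) is believed to be infeasible in well-chosen groups.
    p = 2**127 - 1          # toy modulus; real protocols use standardized groups
    g = 3
    x = 123456789123456789  # secret exponent

    y = pow(g, x, p)        # fast: square-and-multiply, about 127 squarings

    def discrete_log_brute_force(y, g, p, bound):
        """Naive search; hopeless when x is, e.g., a random 256-bit number."""
        acc = 1
        for candidate in range(bound):
            if acc == y:
                return candidate
            acc = (acc * g) % p
        return None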

Why is a proof of security a requirement to claim that a protocol is secure?

It turns out to be very hard to assess the security of a protocol without developing a definition of security and a proof that the protocol satisfies the definition. This explains why it is almost impossible to publish a research paper about a new cryptographic construction at a reputable venue if it is not provably secure.

In fact, a complex protocol that lacks a proof of security relative to a definition of security is likely to have flaws or overlooked features, even if it was designed by an experienced cryptographer.

In some cases these flaws are inconsequential, but they can also be devastating. For mix-nets the track record is particularly bad: fully practical attacks have been found against several protocols, and serious flaws in several more.

Does it really matter in practice?

Yes. We found vulnerabilities in the mix-net used by Scytl in Norway [KTW12]. The Norwegian government requested that the shuffling part of the construction be replaced by our software for the 2013 election.

We found a much more serious attack on the Randomized Partial Checking scheme [KW13]. A single corrupted server could replace the complete output arbitrarily without detection. Read more about this in the answer to the question about this protocol.

These attacks would show up in any attempt to develop a proof of security. Indeed, we found the second attack in this way. It is naive to believe that these attacks would be hard to find for a nation state bad actor studying the systems. Thus, it is prudent to assume that these attacks were already known at least in the latter case.

We remark that there is a small set of notions that are not provably secure, but are still considered secure, e.g., symmetric cryptosystems and hash functions. These are very different from protocols in that: they have a well understood and simple purpose, the techniques used have been studied for many decades, and each specific construction that is considered secure has been studied thoroughly for years. These constructions can be considered to be computational assumptions in their own right.

What is an interactive, or non-interactive, proof?

An interactive proof is a cryptographic protocol executed by a prover and verifier for a given statement such that at the end the verifier outputs a verdict on the veracity of the statement. The security property is that the verifier only accepts a false statement with very low probability.

Interactive proofs are used in many protocols to ensure that all parties follow the protocol, e.g., they are applied in mix-nets to ensure that the ciphertexts are correctly permuted and decrypted to form the output.

A handful of different techniques can be used to modify the protocol in such a way that the prover only sends a single message to the verifier. The resulting protocol is then said to be non-interactive. This is very important in practice for several reasons.
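
The most widely used such technique is the Fiat-Shamir heuristic, where the verifier's random challenge is replaced by a hash of the statement and the prover's first message. The following Python sketch (ours, purely for illustration; the toy parameters and encoding are not secure and are not taken from VMN) shows the idea applied to Schnorr's interactive proof of knowledge of a discrete logarithm.

    # Illustration only: Schnorr's interactive proof of knowledge of a discrete
    # logarithm, made non-interactive with the Fiat-Shamir heuristic (the
    # verifier's random challenge is replaced by a hash). Toy parameters; a real
    # system uses a standardized group and a rigorously specified encoding.
    import hashlib
    import secrets

    p, q, g = 23, 11, 4     # subgroup of order q = 11 in Z_23^*, generated by g = 4

    def challenge(*values):
        """Fiat-Shamir challenge: hash the statement and the commitment."""
        data = ":".join(str(v) for v in values).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove(x):
        """Non-interactive proof of knowledge of x such that y = g^x mod p."""
        y = pow(g, x, p)
        w = secrets.randbelow(q)       # prover's random nonce
        a = pow(g, w, p)               # commitment (the prover's first message)
        c = challenge(g, y, a)         # challenge derived by hashing
        s = (w + c * x) % q            # response
        return y, (a, s)

    def verify(y, proof):
        a, s = proof
        c = challenge(g, y, a)
        return pow(g, s, p) == (a * pow(y, c, p)) % p

    y, proof = prove(x=7)
    assert verify(y, proof)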

What is universal verifiability?

In electronic voting a protocol is said to be universally verifiable if the protocol produces a non-interactive proof that can be verified by any observer.

A necessary condition for a protocol to be universally verifiable is that it is provably secure, since otherwise the checks done by the observer may not guarantee anything at all. In other words, the observer's first step is to check that there is a mathematical proof that the verification algorithm really is a verifier of a non-interactive proof.

In a real-world implementation the abstract protocol is refined into a protocol with concrete values, representations, and encodings. This induces a corresponding real-world non-interactive proof that is defined in terms of files containing bytes or bits and not, e.g., abstract groups. Thus, an additional necessary condition for an implementation of a protocol to be verifiable is that there is a rigorous description of this proof. (Needless to say, developing a correct description is hard, so before such a description is considered complete it should have been used by independent parties to implement verifiers.)

What is special about the application of cryptography in electronic voting systems?

The classical application of cryptography is for two mutually trusting parties to communicate privately. For example, two bank offices may communicate client information. In this application the client can also verify that any transactions are correct by inspection.

In electronic voting the number of parties is larger, but more importantly the trust model is different and voters can no longer directly verify that their input was processed correctly.

Specifically, in a traditional election no single party is trusted to preserve the privacy of voters. Correspondingly, there are multiple mutually distrusting parties taking part in electronic voting systems. Third parties merely trust that too few of them collude to violate the privacy of the voters. This is a weak assumption in practice.

Similarly, no single party in traditional voting systems is trusted to tally the votes correctly; indeed, counting takes place in groups, there are often audits by authorized parties, and the general public may be invited to observe the counting. In many electronic voting systems, so-called zero-knowledge proofs are provided by the tallying parties and can be verified by any independent third party. Other parts of the voting process can be verified using other methods. Read our brief introduction to electronic voting for more information.

Acquisition of Electronic Voting Systems

What is the best electronic voting system in the world?

One cannot say what the best paper-based voting system is. It depends on what the election scheme looks like, the budget, the democratic culture, and how different threats are ranked. This applies to electronic voting systems as well.

However, for specific components or modules such as our software the question is meaningful. We are not aware of any software with the same purpose that is comparable to ours.

How do I know if my current vendor is trustworthy?

All vendors and researchers claim that their solutions are secure, but unfortunately this is not necessarily true and some people claiming to be experts are not.

You should not trust us or anybody else because we say so, you should ask for proofs in the form of academic credentials, analyses of protocols, and analyses of the code. Although there are exceptions, generally an expert is somebody who holds a PhD in cryptography or a closely related field and publishes at high-ranking venues in these areas.

Cryptographers predominantly publish their research in conferences organized by the International Association for Cryptologic Research. Look at the publications of people claiming to be experts by googling their name and "dblp". What credentials do the software engineers have? Which third parties have analyzed the code? How mature is the code? Is there comprehensive documentation? Are there benchmarks? Are there any specifications? Every vendor should be able to answer these and similar questions immediately. We post our answers online.

Why does it matter who implements a system?

Cryptography and computer security are fragile in the sense that a small error can have catastrophic consequences. Inexperienced engineers or programmers that lack a deep understanding can, and do, make such mistakes, e.g., Lewis, Pereira, and Teague have reported such flaws in the system developed by SwissPost. This risk is drastically reduced when experts design and implement systems.

If your vendor cannot show that they have a deep understanding of cryptography and software engineering, then you should not trust them to design and implement your system. It may sound harsh, but it is the overwhelming consensus among academics in computer security and we have good reasons for taking this position.

Using Verificatum Software

Why should we trust you?

As explained above you should not trust any vendor because they say that their products are secure. You should consider their credentials and consult independent parties to assess the expertise of your potential vendor and their software.

We publish regularly at top conferences and all our protocols have been proven secure in their abstract form. We introduced the definition of security of mix-nets in the research literature. We have invented and published the core protocol needed to prove correctness in our mix-net (this has been adopted by several other projects in a naive form). A handful of people in the world have the same background as we do.

We have taught for years, and are still teaching, cryptography, software engineering, and programming in various courses at KTH Royal Institute of Technology. Our software has been analyzed by top academics in software engineering, quality, and verification, and we have addressed all their comments.

Some of our partners, e.g., Free & Fair, have even deeper knowledge of software engineering, quality assessment, and verification. We constantly work together and mutually share our expertise to the benefit of users.

We need somebody to install and run our system, can you provide that service?

We focus on advanced development, analysis, support, and consulting. Thus, we integrate solutions and provide support and consulting for and around our software on a case-by-case basis, but currently we do not provide complete services for electronic voting.

Nonetheless, we would be happy to advise you during the selection process and can recommend either our partners or one of the large vendors, depending on what is suitable. We have a broad network of contacts.

Do I need additional software to set up an electronic election?

Currently, we provide the two main cryptographic components, i.e., the mix-net and the client JavaScript cryptography library. We are implementing the remaining components. For critical elections we suggest that you consult somebody with practical expertise in setting up hardened and stripped servers if you are not able to do this yourself. Protection against denial of service attacks is another important consideration. This is outside the scope of our software and our expertise, but we are happy to advise you in choosing a partner for such services.

How do I know that the software is as good as you claim?

The MIT license allows you to do any form of auditing, testing, or experimenting, so there is no need to trust us if you are qualified to assess the software yourself.

However, the software has been rewritten and refined for many years solely to improve quality. The first version was completed in 2008 and we have done two extensive re-writes since then. All functionality is documented in manuals and usage information. All the code is documented, and all the code is systematically analyzed with several independent static analyzers.

We have had the advice of professors and PhDs in software engineering, quality, and verification to make sure that we write quality code. They have analyzed the software with multiple tools and we have discussed and addressed their comments. A code auditing company, as well as large vendors and previous clients, has studied the code.

Moreover, our mix-net has a single external dependency, the GMP arithmetic library. Its main author, Torbjörn Granlund, helped us write some of the low-level routines in the GMPMEE library. Java code is compiled into byte code that is executed in a Java Virtual Machine (JVM). We have discussed some of our approach with friends among the authors of the JRockit JVM, which was absorbed into what is now OpenJDK.

In short, we are world-leading experts on mix-nets. We know much about arithmetic code and have received advice on our arithmetic code from world-leading experts. We take code quality very seriously and have repeatedly subjected our code to the analysis of top researchers from the relevant fields.

Zero vulnerabilities have been reported for our software despite the fact that this project has been a high-profile target both in academia and in industry at least since 2013. We welcome attempts to find attacks and will give proper credit if any are found.

Why are your webpages black and white?

We focus on content, readability, and functionality. Please let us know if something is hard to find or hard to understand.

Technical Aspects of Verificatum Software

What is the difference between the old version of VMN licensed under LGPL and the current version licensed under MIT License?

There are too many changes to list them here. We have done an extensive re-write of much of the code between the LGPL version and the current version. The current version provides more functionality and is much more secure and robust by default. It has been optimized in several ways and elliptic curves are implemented at the native level. Some data formats have been slightly modified and of course the documentation is more comprehensive and correct. The code quality has been improved drastically, as have the user interfaces.

We are not aware of any security-critical bugs or vulnerabilities in the old LGPL version, but we recommend that users considering it compare it to the current version before making up their minds. We do not support the old LGPL version.

Why do you say that VMN is the only universally verifiable mix-net?

It is of course wrong to claim universal verifiability, or any verifiability at all for that matter, without a proof of security relative to a rigorous definition of security. Surprisingly, some vendors, and even researchers, pitch such proposals as "verifiable" anyway.

There are other mix-nets that use non-interactive zero-knowledge proofs in an attempt to be universally verifiable; indeed, we know of at least one research prototype, UniVote, that has implemented the proof of a shuffle that we invented.

However, no other implementation has a formal specification of the format of the non-interactive zero-knowledge proof of correctness. Our description is mature and several years old and has been used by multiple undergraduates to implement stand-alone verifiers without access to the source code. Thus, we are confident that it is not only correct, but also readable.

We think it is wrong to claim universal verifiability if the protocol is not provably secure or if an independent auditor needs the source code to implement a verifier.

Which primitives and cryptographic protocols are implemented in VMN?

The main protocols are listed here, but please keep in mind that for some of the cited papers there is follow-up work that improves the constructions, broadens their applications, and provides a more careful analysis. We have chosen to cite the papers that introduce the core ideas. Furthermore, we have modified the implementation of several of these protocols for efficiency and to allow us to write better code.

Research is a group effort. We remind the reader that for every construction that stands the test of time, there are many more which served as stepping stones.

How fast is your mix-net compared to other mix-nets?

The short answer is that it is fast enough and that you can run the benchmarks yourself on your particular setup. However, we also provide a thorough heuristic complexity analysis and benchmarks in a whitepaper on our downloads page. There may be other implementations of comparable speed, but we are not aware of any other implementation with a serious benchmark so it is hard to give a more precise answer to this question.

It is important to understand that the claims made in research papers give very poor estimates of the actual running time (we are guilty of having made such claims in the past), and this applies to research prototypes as well.

Examples of sources of error are: (1) much of the computation performed by provers and verifiers in zero-knowledge proofs can be parallelized (sometimes by non-trivial rewriting of the protocol), (2) different exponentiations are more or less costly and some can be done in advance (see the sketch below), (3) communication, parsing, and validation are often not taken into account, and (4) memory is sometimes handled in a way that is infeasible in a complete implementation.
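
To illustrate point (2), below is a Python sketch (ours, for illustration only; the parameters are arbitrary) of fixed-base precomputation: when the base is known before the election, tables of its powers can be computed in advance so that each later exponentiation reduces to table lookups and multiplications. Real implementations use more elaborate windowing, but the effect on cost estimates is the same.

    # Illustration only: fixed-base precomputation. If the base g is known in
    # advance, its powers can be precomputed so that later exponentiations need
    # no squarings at run time, only multiplications by table entries.
    def precompute(g, p, bits):
        """Table of g^(2^i) mod p for i = 0, ..., bits - 1."""
        table, acc = [], g
        for _ in range(bits):
            table.append(acc)
            acc = (acc * acc) % p
        return table

    def fixed_base_exp(table, x, p):
        """Compute g^x mod p from the precomputed table."""
        result, i = 1, 0
        while x:
            if x & 1:
                result = (result * table[i]) % p
            x >>= 1
            i += 1
        return result

    p = 2**255 - 19                     # arbitrary large prime for the sketch
    g = 9
    table = precompute(g, p, 256)
    assert fixed_base_exp(table, 123456789, p) == pow(g, 123456789, p)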

Because of this we refuse to compare our protocols or our implementation with other protocols or implementations (more than in a casual discussion) unless they provide proper benchmarks for a complete mix-net. Micro-optimizing a small part of a software system without attention to software quality, functionality, robustness, or simplicity of documentation, is always possible.

Why doesn't the running time of decryption (and verification of decryption) increase with the number of mix-servers?

We have parallelized the computations in such a way that combining multiple proofs into one can be done almost for free. Thus, all servers do their work in parallel and the verifier effectively verifies a single proof.

Given the simple design of randomized partial checking, why do you use proofs of shuffles in VMN?

Since this is a common question, we provide a long answer for easy reference. Randomized partial checking (RPC) was introduced by Jakobsson, Juels, and Rivest [JJR02]. In an attempt to prove its security, we found a fatal flaw in the original version of RPC that allowed a fully practical attack to replace all votes without detection by corrupting a single mix-server [KW13]. This was, e.g., applicable to Civitas.

This should serve as a warning to people who think "there are no known attacks" is good enough. RPC was proposed with a probabilistic argument by world-leading cryptographers, and it was implemented and used for 10 years before our analysis. We also believed that it was secure and intended to prove its security.

Although RPC is a beautiful idea and it can be realized correctly using non-malleable/CCA2 secure cryptosystems, the result is still a protocol where each server can replace a vote with probability at least 3/4. This means that the definition of security is non-standard and this influences the overall analysis.

A serious problem is that it is hard to implement a verifier in such a way that it can be trusted by everybody. This is why it is considered important to have universally verifiable proofs. Unfortunately, the usual Fiat-Shamir heuristic does not work well with protocols such as RPC. In fact, it means that with work proportional to (4/3)^k, each server can replace k votes. Even a weak adversary can replace 100 votes, and each server can do this independently, so the number grows with each corrupted server. We do not think this is acceptable.
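
To make the estimate concrete, here is a one-line computation (ours, for illustration) of the work needed to replace k = 100 votes under the naive Fiat-Shamir application described above.

    # Illustration of the estimate above: work proportional to (4/3)^k trial
    # computations to replace k votes when the Fiat-Shamir heuristic is applied
    # naively to RPC.
    from math import log2
    k = 100
    print(f"(4/3)^{k} is roughly 2^{log2((4 / 3) ** k):.0f}")   # roughly 2^42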

Furthermore, RPC is still incorrectly applied by both vendors and researchers using the El Gamal cryptosystem instead of a suitable CCA2/non-malleable secure cryptosystem, and (surprisingly) claimed to be secure. We have shown that there exists no blackbox proof of security for such a protocol. Thus, we doubt that anybody will find a proof even if one exists in the foreseeable future.

With proofs of shuffles there are no subtle tweaks of definitions or unclear claims. There is simply no way to change even a single vote without detection, and anybody can verify this. We think this is a much more prudent approach, and a quick look at the benchmarks for our mix-net shows that it is fast enough.

The fact that multiple undergraduates have successfully implemented working, albeit not top quality, verifiers on their own shows that the claims that it is too complicated in practice are wrong.

Why don't you use the XYZ proof of a shuffle?

Neff, and independently Furukawa and Sako, published the first proofs of shuffles [N01,FS01]. Although our proof of a shuffle is easier to understand, there is nothing wrong with either of them and both are fast enough if implemented carefully, but both of them are patented. There are other proofs of shuffles as well. In particular, there are proofs of shuffles that are provably secure in the common reference string model, but they require special non-standard elliptic curves and a common reference string. It is more prudent to use standard elliptic curves and the Fiat-Shamir heuristic in practice.

Groth's work [BG12] is interesting from a theoretical point of view, but it is much more difficult to understand. This is a serious drawback, since communicating trust is central. It is also more complicated to implement, which is a concern for independent verifiers.

The potential speed gains are also much smaller than most people realize. To see this, first note that the prover in our protocol is more or less as fast as the prover in [BG12] (the numbers for our proof of a shuffle given in the published paper are wrong). Thus, one can at most hope to reduce the cost of verification. However, due to re-writing and parallelization, the verifier in our proof of a shuffle runs partly in parallel with the prover, and all mix-servers of course perform their verifications in parallel, so this effect is small. Furthermore, the cost of joint decryption remains unchanged, and the running time of other parts of the complete protocol starts to matter when the zero-knowledge proofs are optimized.

A less interesting problem with Groth's proof of a shuffle is that it uses similar ideas to Neff's original shuffle and we are not convinced that patents do not apply.

To summarize, using other proofs of shuffles would make the system harder to understand and implement, would at best give a small improvement in speed, and in some cases the legal situation is unclear.

How fast is your JavaScript library compared to other libraries?

The short answer is that it is fast enough, but you can also look at the benchmarks on our downloads page or run the benchmarks yourself. Benchmarking JavaScript is notoriously hard since there are multiple platforms with different profiles, and garbage collection makes it difficult to get realistic results when running several consecutive benchmarks for different parameters.

Why don't you use the XYZ technology to do ABC?

This is a common question that we get for many aspects of our software, and the answer is simple. Most of the software developed by the open source community and the industry is not implemented by experts in security, and it is not implemented or maintained with security-critical applications in mind. It is not audited, it is not documented, manuals are partial and not updated, and it is in constant development.

Such software is also unnecessarily complex, since it provides powerful and flexible functionality with a wide array of applications in mind. Thus, it is perfect for most ordinary applications, even applications that have security features, but it was never developed with security critical applications in mind where the potential adversary is a nation state bad actor.

The use of electronic voting systems puts democracy at stake. Thus, each external dependency should be considered a potential security issue. One of our goals is to minimize the number of external dependencies and instead write properly tested, self-contained code with a well-defined purpose that converges to a static state, rather than worry about the next patch of a software library that is actively developed by people we do not know.

This applies to all applications, libraries, and tools that may have security repercussions, such as the latest IDEs, testing frameworks, REST frameworks, serialization and configuration formats, etc. There is rarely any need for these tools, or they are easily replaced by old tools with nearly static code bases. A notable exception is static analyzers, which do not change any code.

For example, whereas in normal production software development it makes sense to use Ant or Maven to build Java code and the GNU toolchain to build C code, it is from a security point of view better to use the GNU toolchain to build the Java code as well, since the GNU toolchain is more mature, more portable, and using a single tool instead of two reduces the amount of code that we must trust.

Relying on some tools and software is inevitable, e.g., we have to run our software in a JVM and in an operating system using some shell, so one cannot say that it is wrong to use any particular tool or software, but one can say that using many of them is bad security practice and some are worse than others.

Our JavaScript software shows that there is no need to employ elaborate frameworks or tools, it would merely degrade security and robustness. The various minimizer tools may change the semantics of the software, so we prefer the more conservative approach of simply stripping comments and redundant newlines. The testing code we need is basic, since only arithmetic routines are checked, so it is hard to justify using any powerful testing framework. We may however port the software to, e.g., TypeScript or JASM, in the future, if this seems worthwhile.