TL;DR: We are working to address the verification problem on the Golem network. Our approach is to check small, randomly chosen parts of computed tasks for correctness. We will publish more of our research results over time.
Ok, so here’s the thing: one of the main issues with building apps on Golem is the verification problem. Since we don’t know what the answer should be for our task (otherwise, we wouldn’t need Golem to perform it…), there is no simple method of verifying if the answer provided by the network is correct. It wouldn’t be a problem in a perfect world, where everyone is honest — but here, it is quite a big one.
It’s not only that there could be someone who wants to spoil the game and cheat just for fun. In our Golem world, there is an explicit economic incentive to work less (possibly doing more tasks in the same amount of time). But, because we have the Golem world under our control, we can change the rules of the game to suit us better.
Let’s look at the very simple example of a spoiled version of Tom Sawyer, who is given the task of painting Aunt Polly’s fence. We can imagine a conversation going like this:
- Tom, I would like you to paint this very long fence.
- Ehh, ok. But first I need some paint.
- Oh, I don’t have time to go to the shop. But, if you buy it yourself, and paint the fence honestly — I will check it — I’ll give you back the money and we can go eat some ice cream. Since I’ve painted this fence so many times, I know exactly how much paint you will need.
So, in this situation, Spoiled Tom sees an opportunity to earn some cash. If he paints only two-thirds of the fence and Auntie doesn’t notice, then not only does he finish earlier, he also pockets one-third of the paint cost.
We can imagine the fence being very long, so checking every single slat of it would be a very tedious task. There is no way auntie would do that.
So, what should she do?
Spoiler alert: the surprising answer is that she only needs to check 2 slats, no matter how long the fence is.
The (somewhat informal) proof goes like this:
Let’s say the fence is 100 slats long and needs 100 units of paint, one unit per slat.
What happens if she only checks one single slat?
Well, let’s imagine ourselves in the situation of Tom. Obviously, he has to paint at least one slat honestly (since at least one will be checked).
So he painted the first slat honestly. What’s next?
If he cheats from now on, there is 99/100 chance that he will be caught (and, in consequence, lose his 1/100 of the cost of the paint he already invested) and 1/100 chance that he will not be caught, and then he earns 99/100 of the cost.
So, the expected reward in that case is 0.
We can see that in the other cases the calculation is the same. If he paints k slats honestly, his expected reward is

E[reward] = (k/100) · (100 − k) − ((100 − k)/100) · k = 0.
So, in the case of checking only one slat, there is an equal incentive to cheat as not to cheat. Now we can add a really tiny weight to the honest side and break that balance. As you’ve probably already guessed, this small weight is checking an additional, second slat. It slightly increases the probability of being caught whenever Tom cheats, and that makes the expected value negative for every case where only a subset of the fence is painted honestly.
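The expected-value argument above is easy to check numerically. Here is a small sketch using exact rational arithmetic (the function and its name are ours, just an illustration of the fence example):

```python
from fractions import Fraction
from math import comb

def expected_reward(k, checks, n=100):
    """Expected reward (in paint units) for painting k of n slats honestly
    when `checks` distinct slats are inspected uniformly at random."""
    # Probability that every inspected slat happens to be an honest one.
    p_not_caught = Fraction(comb(k, checks), comb(n, checks))
    # Not caught: keep the (n - k) units of paint saved by cheating.
    # Caught: lose the k units already spent painting honestly.
    return p_not_caught * (n - k) - (1 - p_not_caught) * k

# One checked slat: the expected reward is exactly 0 for every k.
print([expected_reward(k, 1) for k in (1, 50, 99)])
# Two checked slats: strictly negative whenever any slat is skipped.
print([float(expected_reward(k, 2)) for k in (2, 50, 99)])
```

With one check, cheating is exactly break-even for every k; with two checks it is a guaranteed loss in expectation, which is the whole trick.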
Still, this was only an intuition; if you are not convinced, a paper with the full proof will be available soon.¹
So, back to the story: what’s the moral of it? First, every task in Golem is divided into many subtasks, to be processed by the distributed network. One of the verification methods we can employ is to compute a small number of these subtasks ourselves and compare the results with the output of the network (e.g., in the case of rendering, we render a small sample of pixels from the image and compare them with the network’s output)². The question we had to answer was: how many of these subtasks have to be verified? If the answer were something like “half of them”, there would be little reason to even send the task to the network, because we would still need to compute a large part of it ourselves.
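The sampling idea can be sketched in a few lines. Everything below (names and signatures) is a hypothetical illustration, not Golem’s actual API:

```python
import random

def spot_check(inputs, reported, compute_locally, num_checks=2, rng=random):
    """Verify a provider's results by recomputing a small random sample.

    `inputs` are the subtask inputs we sent out, `reported` the results
    the provider returned, and `compute_locally` our own reference
    computation for a single subtask.
    """
    sample = rng.sample(range(len(inputs)), num_checks)
    return all(compute_locally(inputs[i]) == reported[i] for i in sample)

# Toy usage: the "subtasks" are just squaring numbers.
inputs = list(range(1000))
honest = [x * x for x in inputs]
print(spot_check(inputs, honest, lambda x: x * x))  # True: honest results pass
```

For rendering, the exact-equality comparison would be replaced by a tolerance-based one, since rendering is partly non-deterministic (see footnote 2).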
What we have shown is that, under some assumptions about the “rationality” of the provider (of course, Spoiled Tom doesn’t want to lose his money!) and some other technical assumptions stated explicitly in the proof, the number of checked subtasks is not only constant with respect to the size of the task, but that constant is very small. So, we can dispatch a task with 10⁶ subtasks and check only 2 of them³.
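To see that the deterrent really does not shrink as the task grows, here is the fence calculation again at a million subtasks, for a provider who cheats on just a single one (a sketch under the same assumptions as the proof above):

```python
from math import comb

N = 10**6   # number of subtasks in the dispatched task
k = N - 1   # the provider computes all but one subtask honestly

# Probability that both of the 2 randomly checked subtasks are honest ones.
p_not_caught = comb(k, 2) / comb(N, 2)

# Not caught: gain the 1 unit of work saved; caught: lose the k units invested.
expected = p_not_caught * (N - k) - (1 - p_not_caught) * k
print(expected < 0)  # True: even this minimal cheat loses money in expectation
```

Even though the check covers only 2 out of 10⁶ subtasks, skipping a single subtask already has negative expected value.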
If you have any comments or doubts, send them to us at firstname.lastname@example.org
¹ It will be one of many research resources we are currently making publicly available, to make the company and the process of developing Golem even more transparent. More about it here.
² There is more to it: since rendering is in part a non-deterministic process, comparing two sets of pixels takes more effort than simply asking “are they exactly the same?”, but that is beyond the scope of this article.
³ Things get more complicated when we send a task to many providers. But it turns out that this is not only not a problem, but an even better situation. One solution is to use a kind of lottery mechanism; as before, though, that is a whole different topic.