To address the comments concerning the scoring of this contest, and what some have seen as an unusual discrepancy between the scores given by individual judges as compared to the scores given by the other judges:
When we first decided to host contests here at AoB, we knew that we would have to address the flaws common to all other internet bonsai contests in order to raise their level, integrity, and professionalism to fit the philosophy of our project, and to create contests that would draw the best artists in the world and set the standard for all future contests. After much discussion, the two major flaws we wanted to address first were as follows:
- Popularity Voting
Nearly every forum that hosted a contest in the past used the members of the forum as the judges, allowing each member to vote on their favorite entry. While this may be fun, the results were never representative of actual quality.
Besides the fact that the majority of the voters were of the beginner to intermediate level, certainly not qualified to judge such a contest, it was quite common for voters to give high scores not to the entry, but to the person they liked, and low scores to those they did not. Some forums tried to address this obviously biased “judging” method by making the contest blind, meaning that each entrant’s name was hidden until after the judging. This only made matters worse, as friends would tell friends which entries were theirs, and much guessing was done using backgrounds, photo quality, and other such clues to identify the remaining entries.
In some other rare cases, the moderators themselves conspired behind the scenes to ensure that favorites, or even their own entries, received favorable scores.
To address this, we decided to use only world-renowned bonsai artists to judge our bonsai contests, leaving even the editors here unable to influence the outcome. We selected judges who not only had the experience to judge, but who also had the integrity, honesty, and reputation to judge fairly and without bias.
We also considered the possibility that someday, in spite of our best precautions, we might get a judge who was indeed biased. To counter this remote possibility, we decided never to have fewer than three judges of the same caliber, to add all judges’ scores together and divide by the number of judges, and to use the resulting average score to determine the winners.
Thinking ahead to the possibility of ties, we decided that ties would be broken whenever possible by bringing in yet another world-renowned artist to judge the tied entries individually.
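The averaging and tie-breaking steps described above amount to a simple computation. A minimal sketch follows; the entry names, scores, and function names are purely illustrative, not the actual contest software:

```python
def average_score(scores):
    """Average one entry's scores across all judges."""
    return sum(scores) / len(scores)

# Hypothetical scores from three judges for three entries.
entries = {
    "Entry A": [8.0, 9.0, 7.0],
    "Entry B": [9.0, 8.0, 7.0],
    "Entry C": [6.0, 7.0, 8.0],
}

averages = {name: average_score(s) for name, s in entries.items()}

# Entries sharing the same average are tied; under the rules above,
# an additional judge would score the tied entries individually.
all_averages = list(averages.values())
tied = sorted(n for n, avg in averages.items()
              if all_averages.count(avg) > 1)
```

Here Entry A and Entry B both average 8.0 despite different individual scores, which is exactly the situation the tie-breaking judge resolves.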
No other internet contest anywhere uses judges of this quality, in these numbers, or the other methods we do to ensure a fair, impartial, and unbiased judging process.
- Lack of Transparency
We also noticed that, except for votes that were posted publicly (which in many cases could be altered by management or moderators), the inner workings of the contest judging, the entry selection process, and other pertinent information related to the contest were kept hidden from the public. We felt that this lack of transparency, even if innocent, led many to question the results, thereby casting a shadow of doubt on them.
To address this we created judging sheets that the judges used to score and comment on entries. Once the results of the contest were released, we published each judging sheet for all to see. This allowed not only the public to see that all scores remained unchanged, but it also allowed the judges themselves to verify that their actual scores were used.
We created a scoring sheet in which all the judges’ scores were listed, as well as the math combining them and dividing them for the total average score used to determine the winners. Again, once the results were announced, this scoring sheet was published for all to see.
Besides these measures, every submission email, every email sent to us by the judges or entrants, and every communication between the sponsors of the contest and the editors here is stored on our FTP server for future reference. Every discussion in the editors’ forum here at AoB is stored as well; these records were made available for review by the managing judge for this contest, and they are open for review by our founders, advisors, and editors at any time.
We knew that in order to meet the high standards expected of us, to retain our high level of integrity, and to exceed the expectations bonsaists around the world have for these record-setting contests, every step of the contest would have to be documented and available for review at any time. This is what AoB is all about.
Using these new ideas, as well as many others, we wrote a set of rules that has been praised by some of the most experienced artists in the world. These rules have been used as a foundation for contests on other forums, and they served as the basis for the rules of our recent record-setting article contest.
Yet we still dissect the rules each year, tweak and adjust them, and add to them when needed. We are determined to continue setting the standard for content, discussion, and contests here at AoB, and input from our readers guides us in these endeavors.
The issue at hand, brought up by some of these very readers, concerns what some see as a huge discrepancy in the scores of the judges. A few readers have expressed a concern that on some of the entries the judges’ scores vary greatly.
In this year’s contest, our main judges are from Italy, Australia, and America, while our tie-breaking judges are from Germany and Japan. Our managing judge is from America as well. All of the judges are respected in the bonsai community, and their names read like a “Who’s Who” of bonsai masters. Each has judged more shows and events than most of us have ever attended. (A complete list of the judges for this contest can be seen at http://artofbonsai.org/galleries/aobawards2008.php.)
It should be understood that each of these judges has individual styles and tastes; some lean more toward the traditional and others more toward the modern. Each has undergone different methods of training and development, and each has been influenced in different ways throughout their career. Each comes from a different culture. In short, each judge is an individual with different expectations, tastes, and preferences.
This is exactly what we wanted for our contests, a wide range of knowledge from different cultures and backgrounds used to judge a wide range of entries created by artists from different cultures and backgrounds.
Considering the above, I would fully expect some entries to receive widely varied scores from the judges. To confirm this, I looked at some of our previous contests, such as the North American vs Europe Photo Contest, the AoB's First Annual Display Contest, the Bonsai Today / Art of Bonsai - Photo Contest, and even the article and editorial contests. I was not surprised to find some widely varied scores in these contests as well.
Thinking about this issue, I realized that I would be much more concerned if the judges’ scores were exactly the same, or off by only a point, on every single entry. Even the mainstream art world cannot come to so tight a consensus on art.
While I can’t pretend to understand why one judge would score a tree high, while another would score it low, I do know that these judges are all experienced, respected professionals and I trust that they scored according to their best evaluation of the tree.
I, like others, would be greatly interested in hearing comments on the trees, as I am sure such insight would be educational and inspirational. However, I ask for knowledge, not justification, I ask for education, not explanation.