Comments on the Continuing Turing Test


Score: 9

I generally looked for whole scene trends, continuation of dendritic shapes in neighboring classes, smoother non-dendritic boundaries, and a reasonable distribution of island pixels within larger patches. Clearly, I may just as well have flipped a coin.

I'm interested in this code. What's your plan for access?


Score: 14

I assumed that the real maps were generated with some human input. I looked for maps that had a lot of fine detail and assumed these were not real. Of course if the real maps had been generated by remote sensed data, this would have been the wrong approach.


Score: 10

Neat! I reasoned (unsuccessfully it appears) that the real landscape would have clusters of similar color in a more contiguous and less bifurcating pattern (i.e., solid chunks rather than branching networks). I thought this should be true especially for categories with relatively few cells. I also thought that landscapes that had relatively distinct linear breaks between categories were real.


Score: 11

Yes, I tried to identify landscapes, typically starting with water courses. The random color use made the task much more difficult than it should have been, and rendered some of the maps unintelligible, leading to essentially random guesses. Consistent use of meaningful color would have provided a more meaningful comparison.

I think this was a waste of time.


Score: 13

I tended to think that the real landscapes would be more clustered and have better defined regional pattern.

It was difficult to make the choice without knowing if I was looking at natural land use designations or human-dominated land cover types, since these patterns would have different characteristics. I read through the supporting material quickly, but I don't remember whether you mentioned this or not. If I had known that these were all natural areas, I would have selected a couple of maps differently.

Otherwise, I think the maps look fairly realistic. In some pairs there seemed to be very little difference in the landscapes.

I have a need for generated landscapes for some work I'm doing with simulation models and ecological indicators. I'd be interested to hear of the final results and whether you will be making your realizer available.


Score: 11

Patterns that seemed to represent what I see in nature (e.g., streams and associated vegetation, changes associated with elevational gradients).


Score: 10

I looked for dendritic looking patterns


Score: 13

smaller color masses, contiguous shapes


Score: 11

I tried to identify the map that showed what might be natural patterns. Since I didn't know how the original maps were generated, I also looked for maps that might have been subject to cartographic generalization.


Score: 11

Based my decision on past experience with satellite imagery and air photos, along with landscape data sets in a GIS.

Great job.


Score: 11

-looked for lines that were too straight
-looked for what looked like watershed patterns


Score: 14

I was looking to see how continuous the color regions were. If the regions were very "spotty," I figured that was the generated one. It seemed to me that real landscapes are much smoother.


Score: 13

I assumed the white lines were river systems and looked at them to see if there were colour shadings that could indicate towns or vegetation.

Apart from that, guesswork was a serious consideration! I did think that the maps I tagged as "real" generally were less diffuse - fewer jagged edges and scattered pixels.


Score: 11

I tried to determine the real landscape by finding consistency of relationship of patches to the most fractal features such as drainage patterns. I also put preference on maps showing consistency with landscape folding, such as the ridge and valley province in E. Tennessee.


Score: 12

"symmetry" of gross patterns
irregular edges
abrupt transitions

COMMENT -- I'd like to know how well the realizer could make "null" landscapes where color is a true indicator.


Score: 6

Comment: I (nearly) systematically made wrong selections; the properties that I supposed to be typical of fractal images were those of the real pictures.


Score: 16

I generally looked for more detail. Having seen tons of satellite data at a resolution of at least 0.5 km, and other satellite data I cannot talk about, made it easier.

Nice job...some of those would have been just as good as the real images.


Score: 13

Look for self-replicated patterns


Score: 9

Having not a lot of experience identifying patterns, I found myself guessing quite often. Questions I asked myself to help distinguish different patterns were: What do I think the topography looks like? What is the scale of the given area? What are the pixel sizes? What is being measured (tree stands, soils, urban disturbance, etc.)?


Score: 12

Impact! Gestalt! Try not to interpret any pattern or color.


Score: 10

dendritic patterns
associations of one color with another in some sort of regular sequence or rhythm


Score: 11

If I didn't need to guess (which I had to do a couple of times), I often based my selection on what I thought were probably streambeds (based on the shape). I compared the regions around them and (still) made a "fuzzy" selection of what I thought would be "more natural" in those areas.

Another comment: you did use some maps more than once in the test, didn't you? Perhaps a mirror image, or rotated image. And of course with the other map generated in real time, but the "real" one was the same (or derived) from the same area. Right...?


Score: 14

Looked for small regular patterns of individual pixels.


Score: 16

I saw that the maps were sometimes repeated, and so wondered how they could have been generated in real time. Also, both maps sometimes contained the exact same patterns for most of the colors. Again, if generated in real time, how do they duplicate the real maps?


Score: 6

I tried to pick maps which I thought were more random, which apparently were the fractal maps, not the real ones? I think I'm really biased towards vegetation maps and kept thinking they were veg maps. Therefore, when I saw large, neat blobs of cover I thought they were artificial and must have been generated by the computer. Apparently, I had it backwards! Maybe if I did it again, I might know what to look for and reverse my score...14 right out of 20.


Score: 5

Basically landscape pattern, repeated relationships, what I (obviously mis-)interpreted as anthropogenically influenced areas, apparent riparian/topography relationships, and amount of detail (especially repeated detail along ecotones between types). The Fractal Realizer created landscapes that seemed logically assembled. I can see some real potential for using tools like this to compare historic, extant, and potential landscapes using some of today's spatial statistics and modelling capabilities.

Neat stuff.


Score: 12

I looked for too much "feathering", too little "feathering". Mostly I just guessed by "eyeing" the maps.


Score: 8

It was difficult to distinguish without an idea of scale.


Score: 11

Perceived reality of the given patterns (i.e. patchiness, structure of edges, etc.)

Very interesting.


Score: 6

The best clues are probable riparian areas. I attempted to ascertain whether or not the relationships between obvious riparian/mesic stringers and other classes were or were not consistent. It also might have helped to know where the maps were from (e.g., eastern versus western U.S., or mesic vs. xeric ecosystems), but I presume that was concealed for purposes of the test.

Average loading time for subsequent images was around 40 seconds. Therefore, approximately 13 minutes of the test was spent simply waiting. I use a 10BASE-T internet connection, and therefore data transfer was not the problem. I think you should let people know that taking this test will take considerably longer than the 8 minutes specified. I believe it will take a minimum of 20 minutes for most people. It took me approximately half an hour.


Score: 8

Degree of fragmentation and patchiness


Score: 14

I first looked at the whole figure - but usually saw no intuitive differences. Then I looked at the edges between habitats - how crooked/smooth they were. Then I looked at how the different habitat types related to each other - e.g., if a finger of habitat was surrounded by another block of similar shape.


Score: 6

I tried to distinguish drainage patterns and also tended to assume that scenes with many "small" inclusions (i.e., 1 pixel) were probably not realistic. Not being aware of the nature of the map and also the scale may have hindered my guesses somewhat. Basically, it sure fooled me!


Score: 15

Usually some sense of landforms were given, and landcover types followed those, and that's what keyed me in mostly. Sometimes I chose based on the edges of the cover types - if they were too feathered, or too smooth. Is there an order with which the images are drawn? I would like to take the test again and always pick the first image drawn to see if that's so!

Very interesting. I work closely with Robert Smith here at the JW Jones Ecol Res Ctr, and he told me to check out your web site. We hope to create a multivariate model which is based on landscape indices and bobwhite covey densities to help design better landscape units for quail management.


Score: 13

I looked for riparian/watershed-type features, or evidence of ridgelines, with the corresponding change in features. Not knowing which was which in the pairs, I can't really provide any other comments. Very interesting, though.


Score: 13

I picked the one that displayed first each time! Guess either my eyes are bad, or you guys randomized!


Score: 17

Knowing that maps typically lump rather than split, I looked for contiguous spatial patterns. The maps with more random cell patterns seemed to be "realized."

20 maps seemed like too many. It got a bit tiring.


Score: 13

I tried to get an overall sense of the detail in the drawings, assuming that the pictures that registered less detail were real. My assumption was that a real image would involve more human error, leading to image softening with regard to detail.


Score: 12

Looked for patterns of clusters. I assumed that the data were vegetation or landform coverages and would be more clustered.


Score: 10

Drainage networks


Score: 15

based on connectivity of color types; looking for patterns that might be associated with contours of elevation, river courses, or coastlines


Score: 12

the color patterns


Score: 12

1. Multidimensional pattern similarity e.g. evidence of fractal pattern but punctuated at certain scales by change in pattern.

2. Coherence with a visual image from experience


Score: 9

I asked the little man in my head.

I like biscuits.


Score: 9

I am not a professional map person, but happened upon this site because I subscribe to an ArcView list. I hope my curiosity didn't mess up your data. (My choice of 5 on the realism of the simulations was a compromise...I don't have the experience to provide a valid response.)


Score: 17

I placed more emphasis on straight lines and drainage patterns when comparing the two images. I thought that the fractal image might include fewer straight lines and more speckled habitat patches near edges.


Score: 8

I tried to identify "unnatural" looking patterns. From my results, I was pretty unsuccessful.


Score: 12

I know very little about map reading, but I made 60% correct. That's probably pretty close to a flip of the coin! I tried to visualize connectivity between habitat features to make my decisions.


Score: 16

gestalt


Score: 15

I looked at the connectivity of patches of color. I chose those where there was some continuity of patches over those with "frayed" edges.


Score: 14

I looked for patterns that appeared "squiggly" in the fractal-generated maps - something you wouldn't expect in real maps.


Score: 11

Probably looked for more solid outlines to patches - was aware in most cases the choice was random


Score: 13

I was looking for some patterns found in nature, like alluvial residues and possible vegetation patterns, or for some typical fractal patterns (Mandelbrot sets).

Very interesting test.


Score: 8

At first, I thought it was stereoscopic, so I tried viewing them as three-dimensional maps.

I could do it.

It was interesting!


Score: 17

By basic landform shapes (streams, lakes, coastlines, geologic-formation-shapes' realism).

This was a difficult test, and my pick confidence was VERY low. To me it was easily conceivable that any of the patterns could be real, depending upon the scale of the map and the features being plotted. I guess I'd question the validity of the comparisons since, as I said, ANY of the patterns could be "real." Ma Nature can, and does, really scramble things up in some areas, e.g., geology (and, thus, soils and vegetation, possibly) in the Canadian Shield, western U.S. coast ranges.... Fun test, tho.

Good luck


Score: 13

I felt like this test was not a good test, mainly because it seemed like both options were artificial and lacked context. I think that the earth-tone colors used sometimes created information problems: I would realize that information was present that I did not see at first because the difference was too subtle. My suggestion is to try the test again, only using the colors from the original map.

The inability to distinguish between real and fractal images in this test does not seem to be based on the quality of the information present, but more on test parameters like the colors chosen and the level of definition chosen. I could be wrong.

After about the third or fourth map, I developed a strategy of looking for complexity, scattering of data (fractal bias), and how this fit into making a meaningful image.

But, meaningfulness was pretty impossible to judge since the scale of the map and amount used made context impossible.

Sometimes, I felt like I could tell that one map was more "meaningful" than another. Most of the times, complexity was used for my first guess at "real" unless it was too scattered.

However, towards the end of the test, I realized that different parts of the image might fit into my concept of real while the other part was fractal (in my mind). So, I knew my criteria were pretty much meaningless.

Basically, I would say that I felt like I had no criteria to use to make a decision that were useful. So, I would expect that few people would be able to tell which one of these maps were "real". I don't think this is because the fractal images look real, but because the test used destroyed any criteria for making a choice between the options.

Hope that helps


Score: 19

I tried to identify which map was more "grainy," and hence decided this must be the fractal realizer map. The maps using more contrasting colors were easier to identify than those using similar colors.


Score: 12

There were obvious similarities between the two landscapes, such as identical shapes, leading one to suspect the "simulation" is not at all random. This realization made the rest of the test somewhat unsatisfactory.


Score: 11

I tried to pick up drainage patterns and avoid too much speckling patterns in the landscape.


Score: 14

Generally looked for sharpness along defined map edges. Real maps showed a more definitive line than the fractal images.


Score: 12

tried to use fineness of classification as "real" landscape


Score: 20

I had no real idea which map was real and which was not. I looked for the more chaotic map and chose against it. The real maps, I assumed, were less "pixelated." Fractals are self-replicating at an infinite level, so looking for the repetition may also have helped.

I would like to note that this test was very difficult. Basically, these didn't even look like maps to me. Are you sure your program is working right, because it said that I got 20 out of 20 and my answers were no more than educated guesses. I hope these results are helpful to you. If there is anything else I can do, please email me and I would be happy to help you in anyway I can.


Score: 9

looked for consistency in changes between colors and regions


Score: 11

It seems that some features were copied exactly?


Score: 12

Maps were identified using gut feeling. Tried to look out for "classic" Mandelbrot gingerbread men.

It would be interesting to see if we would be fooled by a map of larger extent.


Score: 14

Mainly by the association between categories, i.e., one colour will often be associated with another; if a category is adjacent to many categories on one map and adjacent to fewer on the other map, then it is likely the latter map that is the real one.


Score: 12

Homogeneity of pattern


Score: 7

Without a scale, I guess I was trying to pick the more fragmented landscapes as real.


Score: 12

By trying to find a logic order in the placement of the coloured areas.


Score: 14

the shape of the shapes and surrounding shapes (did they make sense in a landscape?);
sometimes probably an aesthetic preference;
I didn't use the length of time for the image to completely appear on the screen.


Score: 13

Without knowing what anything represented, it was difficult to make decisions. Something to ask or think about would be to determine the test taker's right- or left-handedness, since the maps were presented left and right; maybe try up and down?


Score: 8

Primarily on hydrologic features, or at least what I thought were hydrologic features.


Score: 13

Looked for linear features and smoothness of the patterns


Score: 9

I would guess that if you told the testers before the test what was being mapped, the number of people identifying the "real" map would increase. Patterns of vegetation growth are quite different from desert areas to wooded areas, etc.


Score: 11

Using stream or drainage patterns


Score: 11

Pretty hard to tell without some context to evaluate from. Some were quite obvious due to pixel forms I am familiar with, but most were quite difficult. Different, but difficult! Is the goal to make them look the same? Because they don't.


Score: 12

By the pattern.

Very interesting test. The generated maps are quite good.


Score: 15

Complexity of the landscape: when it appeared "too simplistic," I took it to be artificial, since natural landscapes are highly complex. Although I have some reservations, mainly because people "do" rely heavily on empirical evidence in their judgements, this is a very interesting experiment and an excellent use of the web.


Score: 13

linearity of feature - ie. "drainage patterns"


Score: 11

As the test progressed, I began to see differences in the dendritic quality of different images and started to base my selections on these - the only trouble was deciding which would be real. My thought was that it would be scale-based: at small scales it would seem that real landscapes would be more dendritic than fractals; at large scales, maybe the fractals would be more dendritic.


Score: 5

I looked for landscapes that had more single or few-pixel clusters. Obviously my rule was not a good one.


Score: 10

I tried to sense which patterns were 'natural' and which were more random in nature


Score: 9

Mostly I just guessed. The simulated landscapes seemed very real and this made it very difficult to determine the real landscape.


Score: 9

I looked for gradients between different "classes". I also looked for overly complex ecotones.


Score: 19

Learning definitely has something to do with it. While I recognized a couple of images, the majority I picked based on what experience and the learning the discussion imparted about dendritic networks, strongly trending patterns, fractal complexity, and topography-constrained effects.


Score: 9

Since I did not choose well, my method of using drainage-like features was not good.


Score: 12

I thought about the cartographer and his generalization problems. So, if it was really fuzzy, or had lots of tendrils and fancy stuff, I guessed it was computer generated. A classified remotely sensed image could have that complexity, but I wouldn't call it a map. I also looked for reasonable spatial associations and for ones that didn't make some sort of "sense." It's a squidgy thing. But I don't think I did that well anyway. I didn't like the fact that so many things were identical between the maps, e.g., the white pixels were always identical (seeds?), and often there were identical patterns from one image to the next. It was interesting and fun, and I will tell my friends and colleagues about this.


Score: 11

I had trouble with the yellows - I thought they looked different in your test grid but could detect no diff on maps! Sorry!


Score: 8

I tried to distinguish "realistic" clustering around the preserved (white) areas. Great maps! How's it work?


Score: 9

I looked at what I assumed to be drainage patterns, and also level of fragmentation of the various classes.


Score: 19

For the real landscapes, I looked for connectivity of the colors (especially in terms of what I thought were streams/rivers) and the smoothness of the contact between the colors. I thought concentricity of colors could be useful, because I am most used to using geologic maps, but I don't think I was able to use this much. You have a good idea about presenting your maps more than once as rotated images. I eliminated maps that showed lots of 'island' pixels, believing that they represent artifacts in the conversion process of generating a map from an image.


Score: 13

Primarily hydrology. Difficulties were in being unable to determine the theme, i.e., soils vs. geology vs. vegetation.


Score: 12

I looked for the most natural, least fractal-ish.

1) Could use more contrast in the colors (lots of eye strain).
2) I feel I would do better with less pixelation; surely you could fit smooth lines to the landscapes?

Thanks, I enjoyed it.


Score: 10

I identified colors that were the same between pairs, figuring that they were actual features on a landscape; based on the patterns of what I thought was real, and using a bit of intuition about how land patterns relate, I chose what looked to be real. I also tried to avoid landscapes that looked overly fractal, but apparently my strategies didn't work that well.

Good job!


Score: 6

detail, logical placement of patterns


Score: 10

Three comments:

  1. I tried to distinguish based on these criteria, I think. First, more precision = real. Second, are there patterns like river valleys that are consistent = real. Third, are there geometric shapes (straight sides greater than an apparent pixel) = not real. Fourth, was there an overall pattern reflecting an underlying geomorphological pattern = real.
  2. I am quite deficient in my color perception. How many others might be similar? Need a question or test about that.
  3. I see some parallel logic with the Turing test, but I think you are stretching this to call it such. There is no artificial intelligence, by most definitions, being tested. One is only distinguishing between patterns, one computer generated.

Good luck with your project. Will you provide participants notice of results in some fashion?


Score: 14

I'm not sure how I picked what I thought was real ... instinct? Most of the time it was very hard to choose. Occasionally one looked very fake, and so I chose the other one.

I'm not as familiar with the topography of humid climates as I am with arid ones. I might have had a better idea of which was real if it were from an arid climate.


Score: 12

Number of floating pixels, consistency of adjacency of different landscape elements

I was actually expecting "simulated" landscapes to be truly simulated - instead of using the basic template of the "real" landscapes. Since there is always some abstraction between data and presentation of data, then both graphs can be considered simultaneously "real" and "simulated".

I think the true Turing test will come when you are completely simulating landscapes (though perhaps keeping the fractal dimension, other spatial parameters, number of colors, number of pixels of each color, etc. the same)


Score: 8

There were too many identical patterns in both the real and simulated maps. It was difficult to try to distinguish between them when so much (usually one or two colors) was the same. Are your simulations real multifractals or are they formulated from the clipped map?


Score: 10

Continuity of drainages, orderly progression of colors away from different drainages, "reasonableness" of what I thought were topographic elements. I am a geologist, so I looked for what I thought were geographic and geologic elements.

Nice Job!


Score: 8

largely looking for aquatic patterns


Score: 13

I looked primarily at the stream channel patterns.


Score: 13

watershed shapes and transitions between classes


Score: 13

large jumps in contour levels - fractal images seemed to have more detail in some areas


Score: 7

I tried to distinguish the landscape by looking for features such as river valleys or linear breaks in soil type. I don't think it would add too much bias if you told the tester what "type" of landscape they were looking at and what specifically was being mapped (topography, vegetation). I think without that info, the test is truly impossible, because you have no idea what you are looking at.


Score: 10

Focused on riverine-type features and land adjacent to them, as well as general interspersion of colors - assuming a natural landscape to likely have more fine-scale mixing.


Score: 13

I felt that the more "fragmented" map, or one with more jagged boundaries (edges) was more likely to be the artificial one. On my browser, one of the maps always showed up faster than the other, and the time difference may have subconsciously influenced my decision. It seems to me now that I tended to pick the map that appeared first! I know this is a rather arbitrary reason - but how long does the fractal realizer take to generate its map?


Score: 11

Scattered design


Score: 7

Well, being as resourceful as one might be, I created a bitmap from the 9-squares, 1-border color guide you presented me at the start of the test. I brought it up in Paint Shop and tried to match the map colours with the colors from the color chart; however, I did not succeed, as my score reflects my choices. Your creation of realistic fractal maps is great and near perfection. Good luck in your endeavors, and thanks for the opportunity to take your test.


Score: 12

I found it fairly difficult to distinguish the landscapes, and used mostly a gestalt approach.


Score: 10

I really didn't have a way to tell, which is why I was only at 50%.


Score: 15

Landscape structures, relationships of classes, linear patterns, cohesiveness of patches

This is a very interesting experiment, but what you call a real map looks like a classified satellite image to me.

Images are not really maps, vast literature on this....


Score: 10

Complexity and pattern; i.e., I looked for patterns that appeared realistic for a landscape (e.g., fingers that looked like riparian corridors).


Score: 8

Relationships between the dendritic, riverine shapes


Score: 17

The amount of pixellation on the edges was a clue, as well as the matching to hydrology (i.e., if the landscape has less relationship to the hydrology (white holes), I guessed it was false). Also just shape and complexity.

It'd be entertaining to see them again knowing - I imagine that the "look" is identifiable, though, and would bias results more as people tried again with better info than just how many were right/wrong.

It might be a little more helpful if you started with a couple of actuals, then did the test, to put people on a more even starting plane, to get used to what part of the landscape you are looking at.


Score: 10

- details
- scattering


Score: 16

This is a fascinating test. I used a couple of methods to distinguish the real ones:

  1. I looked at the interface between zones. Some of them look really "fractally."
  2. I looked for heterogeneity. Real maps have many areas, shapes, and lines that resemble fractals, but usually also have a few features here and there that are large and smooth.
  3. The fractal maps tend to be more homogeneous: if there are small, fragmented zones, then the whole map tends to look like that. If a real map has small fragmented zones, it will often not look like that everywhere.


Score: 13

I'm not too sure what use this "Turing test" has. Without knowing what type of landscape, scale, etc., I found it difficult to know what to expect--i.e., is the system supposed to be reflective of a totally natural system, or one that is influenced by humans in one way or another. I didn't know whether I should be looking for too much homogeneity or too much interspersion. I guess I was looking for a little "context" in order to better judge what was live and what was Memorex.

It also seems to me that just because one of the pair was totally fictitious and computer generated doesn't mean that it isn't completely identical to a natural landscape found elsewhere. Or maybe this is getting into the "1,000 monkeys on 1,000 typewriters for 1,000 days" argument...

I also found it somewhat unnerving (sp?) that certain colors (e.g., white) were identical from one map to the other in the pair.


Score: 8

aggregation patterns for classes and 'blockiness'

I suppose that if one knew the labels of the classes, one would be able to tell the real landscapes better, e.g., when the contiguity or interspersion of two classes is not very probable.


Score: 11

Fractals are usually more "jagged" and have rougher edges than actual landscapes. In some of the pairs, one seemed too jagged. But in most it was very difficult to distinguish.


Score: 13

I work in forestry/GIS and used my experience with vegetation patterns to help pick the real map (size and distribution of patches).


Score: 11

looked for things like overly random scenes; too much linearity or border effects; tried to distinguish what looked to be "natural" drainages and the way they interfaced to adjacent polygons.


Score: 15

I guessed that the "fuzzier" images were artificial. I guessed that noticeably straight edges indicated an actual landscape.


Score: 4

After a few pairs, I tended to look for overdependence of the fractal model on elevation change, and for variation within more globularly shaped regions. For the elevation overdependence, I had to assume that elevation rose as one headed away from the white areas. If the white areas were timberlines, then I did exactly the opposite of what I intended to do. After a while, I tended too much to look for variation and jagged borders rather than looking for continuous areas. It would have helped to know whether this was ecoregions, rather than land use, or even a single species.


Score: 10

Apparent organisation, recognisable structures, contrasting patch structures (which I thought the fractal realisations might not reproduce)


Score: 15

I tried to visualize which patterns could not be made by nature.


Score: 10

Pattern recognition. Adjacent coloring and similarity to fractal patterns I have seen over the past few years. Some fractal landscape imagers I have seen tend to, in some areas of a given image, produce a "houndstooth" effect wherein the color distribution is fairly even and "checkered". It seems to happen in areas that represent lower and more even elevations similar to a marshy type area on a real map but the patterning is too "uniform" somehow.

It's been a very interesting test.


Score: 10

Based on their complexity. Complex maps seemed unreal.


Score: 17

Very impressive! My friend took this test and told me to take it too. He got a perfect 20/20. I suppose 17/20 isn't too bad. I have very little experience with fractals but have seen a few of them before. I just tried to imagine each image as an actual map... the one that fit the best in my mind was the one I chose. I don't mind my name being known... :)


Score: 15

by the smoothness of the linear features
by the chaotic nature of the patterns


Score: 9

Realistic spatial patterns, Neighbouring colors, guessing


Score: 9

The consistency of patterns in a map related to the other patterns present and to the pixel size: many loose (pixel-sized) small areas I took as being false.


Score: 8

Look for 'un-natural' patterns


Score: 9

Patterns, distribution, linear trends, color juxtaposition


Score: 16

used patterns that indicated a gradual shift in land type as opposed to immediate change


Score: 4

I looked for images that showed more fractal patterns, not images with large areas of the same color. I guess that method identifies the artificial images.


Score: 14

I assume that when a map is made and classified, a lot of smoothing goes on. I looked at the images and looked for noise, i.e., a lot of singular pixels. The maps were pretty good, and I was mostly using the WAG method (Wild Ass Guessing).


Score: 15

I have a feeling that my "rules" were unintentionally reversed for the last half. I may have been looking for streambed relief, or parallel valleys that sometimes leave nice neat signatures. Frankly, I was guessing.


Score: 17

The interface between adjacent colored areas appeared very irregular. In other words, there was not a smooth transition between adjacent colors. The fractal images had a pixelized appearance, especially at boundaries. One main characteristic of the fractal landscapes seemed to be areas of very few colored pixels that were disconnected from a larger mass of pixels of the same color. I primarily went on instinct as to which image appeared more natural.


Score: 8

It was hard not to pick the first one drawn. I think my picks were biased in this regard


Score: 10

I tried to wait until both were displayed before looking. For the first few, I looked closely and tried to come up with things that seemed odd in one, then selected the other. In the later ones, I just went on initial impression. If I was to do it over again, I'd use one method one time and the other a second time and compare my results.


Score: 12

As someone with no industry knowledge, most of the time I based my decision on how fast the image was downloaded. If the image is generated in real time, then there is a flaw in your testing method. Since I got 10% more than the statistical average, there might be some truth in my hypothesis.


Score: 8

By seeing if they looked like a process (that I'm familiar with) generated the pattern. Also, by comparing the landscape with maps I am familiar with.


Score: 13

mostly, I guessed


Score: 12

I took this using a 15" SVGA screen. Would you expect screen size and resolution to impact individual results, or is it strictly a macro-level pattern recognition exercise? I thought about this as I was taking it, but never really came to a conclusion. I have a higher resolution screen on my PC at home, but taking it again would probably give a biased result. That is, if screen size and resolution do make a difference.


Score: 13

I have a geological engineering and earth sciences background, but I don't spend much time looking at maps. I looked for continuity of patterns, or tried to imagine a geologic setting that would account for abrupt changes. I looked for river patterns that looked real. Generally, however, for the most part it was a coin toss, since I could think of reasons why each side could be the real thing.

(I should add that my screen only was showing 9 of the 10 colours.)


Score: 17

Looked at the smoothness of patterns, pixelization, and complexity of the delineations. Also kept in mind the tendency of a photointerpreter and/or cartographer to generalize and group similar features.


Score: 10

by the amount of co-mingling of colors and shapes that might indicate features


Score: 11

I tried to find features that seemed to have drainage, elevation, slope, and aspect characteristics. These factors are so important to so many spatial phenomena that I felt they were bound to show up in the real landscapes.


Score: 16

There was something about the crispness of polygonal areas. The maps that seemed to have fewer scattered pixels looked better to me. There were a number of maps I thought were real that also had a lot of scattered pixels, but there was just something in the way they were scattered that looked more real, as if the scattered pixels indicated a transitional area rather than just noise. There was just an overall crispness to what I thought was real.


Score: 9

It was difficult without information re: scale. I tried to select based on patch shape and degree of patchiness within landscape.


Score: 9

I tried to find patterns where colors "looked out of place" in their surroundings. For example, specific types of geology are commonly found near specific other types. I looked for patterns like this. I looked for differences in "color clustering" and randomness.

I was impressed. I have used several different programs to simulate elevation data and can usually pick those out from real elevation data. I was impressed by this program's ability to make it so difficult.


Score: 13

I looked for patterns that seemed artificial, i.e. a few scattered points where there were no others like them. I also considered landscapes that had more large contiguous color blocks to be more "natural".

You have an interesting experiment here. Thanks for setting it up. I am a biologist interested in the fractal nature of small mammal burrows.


Score: 15

The fractal nature of the artificial ecoregion EDGES is the biggest clue.


Score: 10

I guessed that the realizor wouldn't make really straight edges, and that the "snowy" patterns weren't real.

How does it do on other landscapes, especially ones more dominated by anthropogenic effects?


Score: 7

Man-oh-man, that was tough. I couldn't have done worse if I tried! With no information on scale or relative topography, I was working entirely on "gestalt". I didn't spend time trying to predict relative elevations by comparing colors near "low-lying" areas (those next to apparent rivers) to those relatively far away from those areas. Of course, maybe the fractal landscape realizer is just really great, and my score will be typical. Can't wait to hear the results!!


Score: 7

I don't know much about fractal landscape characterizations, so I wasn't sure what to look for...


Score: 11

Tried to look for directional trends consistent among different colors, consistency of treatment of "rivers" and "drainages", large areas free of internal squiggles (I figured a fractal program is always tempted to put *some* structure into an area, so a huge patch of consistent brown suggested to me the map was real -- ie, that the real map was less "fractal looking").


Score: 12

there seemed to be a pattern based on the degree of detail


Score: 11

Tried to find Fractals by looking for 'ice crystals' in the differing parts of the landscapes. Any straight lines in the images tipped the scale towards 'real'.


Score: 10

Look for natural appearing patterns and associations to adjacent patches


Score: 17

In most cases, the real landscape was distinguishable due to excessive edge or unlikely-seeming 'flows' of covertypes in the fakes. In a couple instances, the relative overall shapes and orientations of distinct covertypes did not seem likely; i.e. an elongated type running N-S which then intersects with an elongated cover type running E-W.

I am very interested in both GIS and fractals, and am planning a project, which could turn into a thesis, on the statistical relationships present in the Mandelbrot (u) Set with real-world phenomena. Keep up the good work!


Score: 12

I don't know exactly ... mostly I went with the one that "felt" correct.


Score: 13

I sort of looked for 'fractal-like' patterns.


Score: 13

I tried to "see" what the map represented and think about how areas, or units of information are related to each other. I looked for geographic shapes that would lead to resulting elevation, foliage or hydration.


Score: 10

I scored about what I thought I would (50%), as most of the time I was just making blind guesses, with no real clue as to what I was looking at. The way the test is set up, removing all referents as to what the maps represent, would reduce anyone without special experience in the field to mostly picking at random. So I guess that at this rather high level of abstraction, the Fractal Realizer does what it's intended to!


Score: 10

Basically by guesswork. I attempted this earlier today, with a 50% success rate, but the machine crashed before I could post the result. (So I may have been operating on memory this time around.)


Score: 10

I tried to make the distinction by assuming the generation algorithm should do some kind of smoothing, i.e., the real maps should be somehow more pixellated and there should be stronger contrast between the map elements. I guess this did not work.


Score: 16

Straight lines vs. Mandelbrot look-alikes.


Score: 16

  1. I identified linear features that made sense as landforms and thought those should be real
  2. I identified logical dendritic drainage patterns and associated those with the real landscape
  3. I identified coherent patches of classes, thinking that they would be larger and more regular on the real map based on both generalization and scale
  4. I immediately threw out maps that looked too random as synthetic


Score: 12

I took real landscapes as having slightly more contagion visually. I also looked for non-self similarity as a cue for real.

Note: I could only distinguish 9 colors in your preliminary color map, no matter whether I shut down other applications and no matter what version of Netscape I used. I really wanted to take it anyway.

I think your mean score of 11-12, which includes my score, is close to expectation (50%) if one of the two maps is chosen at random.
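The "contagion" cue this respondent mentions can be approximated numerically. The sketch below is my own illustration, not anything from the survey, and it is a much cruder proxy than the entropy-based contagion index used in landscape ecology: it simply scores a categorical raster by the fraction of 4-neighbor cell pairs that share a class, so clumpier maps score closer to 1.0.

```python
# Crude contagion proxy for a categorical raster (illustration only):
# fraction of orthogonally adjacent cell pairs sharing the same class.

def adjacency_contagion(grid):
    """grid: list of equal-length rows of class labels."""
    rows, cols = len(grid), len(grid[0])
    same = total = 0
    for r in range(rows):
        for c in range(cols):
            # right and down neighbors cover each unordered pair exactly once
            for dr, dc in ((0, 1), (1, 0)):
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    total += 1
                    same += grid[r][c] == grid[nr][nc]
    return same / total

clumpy  = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]   # solid vertical bands
checker = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # maximally mixed
print(adjacency_contagion(clumpy))    # → 0.75 (9 of 12 pairs match)
print(adjacency_contagion(checker))   # → 0.0
```

Under this respondent's hypothesis, the real map in each pair would be the one with the higher score.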


Score: 11

I tried to distinguish likely patterns of colors based on experience with landforms, i.e., contained regions were suspicious, as were sharp interfaces between colors over a long, straight interval, etc.


Score: 9

natural-looking noise and landscape stringer patterns.


Score: 14

This Turing test is very interesting. Here are some comments/questions...

1) How do you determine which side will be the fractal image? If it is half and half, isn't there any problem in the interpretation?

2) It would be preferable to announce before the Turing test what kind of map we will interpret (maybe I read the introduction too fast...)

3) I distinguished the landscapes by their contours, fractal images being more definite and more replicable in terms of form and patterns. In the real maps, some patterns were definitely not fractals, in what I understand a fractal to be.


Score: 10

I looked for things that looked like fractals.


Score: 13

By comparing the shapes and the sequences of the patches of different colors.


Score: 10

Realistic-looking drainages? Natural-looking breaks between units. The real test, of course, is the correlation with the actual landscape. I'm interested in your model. As a PhD student with an interest in modeling soils and landscapes, I could use additional information.


Score: 14

Continuity of colours; an increased number of single pixels increased the likelihood that I would select that map as real.


Score: 11

Primarily by the amount of mixing of different data within the map. My theory was that the computer map may produce a more homogeneous landscape effect.


Score: 12

Continuity of areas to select the correct maps; irregularities to rule out the bad


Score: 16

From patterns of the colors. Knowledge of soil landscape patterns and landform development and drainage patterns in various kinds of bedrock.

I think I tried to second-guess myself on a couple of the paired images. I wasn't too familiar with some of the drainage/geologic structure patterns (i.e., I guessed!).


Score: 9

I tried to distinguish based on complexity of the image.


Score: 9

I guessed. Four colors and 1 km resolution in a box 50 km/side is darned difficult. I think it will skew your results, favoring indistinguishability to an unwarranted degree. For example, I could give a single pixel of a scanned image vs. a random pixel and show that my pixel generator was just as good as a scanned map.

Let me know when you go for (say) 1028 X 1028 and 64 colors.


Score: 12

I wondered whether neighbouring colours could only ever be the same. I was unsure whether the colours were showing height, but this did not seem to be the case because neighbouring colours did always seem to be the same.


Score: 12

I would have found it much easier to assess the landscapes if I had known they were ecoregions or similar before I started the test. I assumed some similarity to elevation (contouring) but knew that in most cases neither map was particularly evocative of elevation. I felt a little more information before starting would have helped. The general appearance of paired maps is remarkably similar.


Score: 13

General pattern. Tended to assume overly fine detail was probably generated rather than real.


Score: 15

It was not easy to develop a clear set of rules for discerning the actual landscape from its fractal counterpart. So I relied somewhat on 'gut instinct' and a little more on education and experience. I think that first impressions are important in terms of what looks realistic. A measure of realism I tried to develop had to do with the level of aggregation of patterns (or ecoregion types?). There are some discernible patterns in nature, like land cover that follows a river or drainage. However, I also looked for the patterns to 'break up' or become randomized at the edges. I wasn't sure that the fractalized landscapes were built to handle the latter condition. Overall, a very interesting test.


Score: 13

Gut feeling, which is why it was easy. Of course, lacking further information, the tendency is to interpret the colors as one would for familiar maps such as topography. Such maps have familiar assumptions such as color gradients to represent altitude. Simply by following or not following such conventions, one could produce maps that looked "realistic" or not, based on a real landscape. Perhaps a test in which we knew what was being represented would be more interesting, i.e. are the colors topography, vegetation, rainfall, soil nitrogen, some combination of several variables .... ??


Score: 9

Tried to compare to known erosion and formation features.


Score: 13

I used what little I know of iterated fractal structures, topology, and figuring out which color blocks were identical on the two maps. That was fun.

Nice job, Bill.


Score: 9

I assumed certain patterns would not exist in nature. I guess I was wrong!


Score: 7

I looked for rough limits between colors, thinking that reality is more uneven.


Score: 7

I thought about how land patterns look on topographical maps I have seen and looked for the map that reminded me more of such maps. This was a lot of fun, but it would be even better if I was told after each map pair whether I was right or wrong.


Score: 16

At first I couldn't find any specific, repeated characteristics that could be used to distinguish the landscapes, but then I discovered a kind of pattern: for nearly every pair there is one picture with a significantly higher number of "stand-alone" dots/pixels than the other. Which one is then real? Well, I do not know for sure, but based on my own experience with fractals vs. the real world, I used the following theory: computer-generated fractals are truly and equally fractal/fragmented on all scales, whereas the real world is a lot more complex. The amount of fragmentation depends on the scale, and landscapes tend to smooth out a bit due to erosion, which will create fewer "stand-alone" dots/pixels.

Maybe this theory of mine is just too far out and my hit rate of 80% was pure luck. Therefore I will put my theory to the test once again, right away!

I'll be back with the result no matter what happens (I can hardly wait).
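This respondent's "stand-alone pixel" theory is concrete enough to sketch in code. The snippet below is my own illustration (an assumption, not anything the survey actually ran): it counts cells whose orthogonal neighbors all belong to a different class; under the theory, the map with fewer such isolated cells would be picked as real.

```python
# Count "stand-alone" cells in a categorical raster: cells whose four
# orthogonal neighbors (those that exist) all belong to a different class.

def count_isolated(grid):
    """grid: list of equal-length rows of class labels."""
    rows, cols = len(grid), len(grid[0])
    isolated = 0
    for r in range(rows):
        for c in range(cols):
            neighbors = [grid[nr][nc]
                         for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                         if 0 <= nr < rows and 0 <= nc < cols]
            if all(n != grid[r][c] for n in neighbors):
                isolated += 1
    return isolated

smooth   = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]  # no lone pixels
speckled = [[0, 2, 0], [0, 0, 0], [0, 1, 0]]  # a lone 2 and a lone 1
print(count_isolated(smooth))    # → 0
print(count_isolated(speckled))  # → 2
```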


Score: 10

I looked for boundaries that tracked each other, not simply isolated clumps that popped up in the middle of solid regions. Also, 'sprinkles' and disconnectedness seemed less likely to reflect natural structures. After the 3rd pair, it was clear that both maps shared one or two colorings in common. I was never certain what I was looking at! Drainage basins and mountain spines look very similar to me. Only 50%, I may well have done better with a coin flip!


Score: 10

I tried to look for 'nonsense' patterns, overly complex, or overly simple patterns. Weighing this against the common parts of each, I tried to determine the fractal version.


Score: 14

What I did was try to recognize patterns like riverbeds or fluvial sediments. That wasn't very successful, I fear.

But... IMHO, you guys really ought to make a sharp distinction between REAL MAPS and MAP-LIKE IMAGES OF ASPECTS/THEMES OF THE REAL WORLD. All images (that you call 'maps') were real, but 50% of them did not refer to geodata measured in the real world.

And what about the term 'landscape'? I was trained as a landscape architect, and to me a landscape is a 3D thing in time and space, not a rather coarse pattern of little squares in 9 colours.

What you show are simulated data visualizations with poor mapping quality. What you test is whether we, the audience, can distinguish between a perfect simulation of visualized field data and visualized field data.

That's my comment. Now, I'm curious to see what my score is. Regards


Score: 15

Sometimes the false map was too obviously "fractal". That is, some of the boundary regions were more mixed than I would anticipate in a real model.


Score: 10

Through my work in remote sensing and GIS I have found that even in nature there are linear relationships between features, i.e., a river will flow off a hill between two ridges; along a coastline there is usually a gradation away from the coast to inland. In short, there always seems to be a relationship between geographic features.


Score: 14

I tried to choose the map with the fewest fragmented (small isolated) areas.


Score: 12

Compared the areas that showed significant differences and looked for Mandelbrot-type artifacts.


Score: 10

In each case they both looked completely artificial. I guess I'm not as good at this stuff as I thought.


Score: 13

I tried looking for long, sinuous, windy parts which are very highly subdivided. I think of these as examples where fractals go on and on beyond what we usually deal with. The fractal convolution number was pretty well controlled... so that wasn't too helpful. Next I tried using "realistic" landforms like streams coming off mountains, or geological processes. But these weren't too successful either. These were very convincing simulations.


Score: 8

I first looked for smoothness of shapes: I would pick the pattern that looked smoother and less broken-up. I would also try to envision a drainage network, and pick the map that seemed the most realistic topographically.


Score: 8

The way I tried to distinguish the maps was to try to see which map was more complex in its layering and small patches.


Score: 10

Thought of the maps as representing surface features such as vegetation types or habitats, urbanized areas, rivers/streams. Looked for familiar patterns to equal "real" patterns. Much harder than I thought it would be!


Score: 19

Small spots of isolated color were either more prevalent or were over-dispersed on the maps I identified as artificial. It would be interesting to review the pair I got wrong: there was one I knew I was essentially guessing at, but I cannot remember which one it was.


Score: 12

Tried to identify colors indicating highly dissected areas. The more detail in hydrology features, I thought, might indicate the real thing.

Also looked at changes in color patterns; the more gradual or subtle changes might reflect a truer landscape (but it wasn't easy to tell the difference in most cases of highly dissected areas).

Some images I just plain guessed at.

This is a very interesting test. I am quite surprised how realistic the fractal images were... whichever ones they were!

Good Luck.


Score: 6

It is a problem to identify the "real" type of landscape if I don't know how that landscape is defined: ecoregion or land cover, geology or elevation. My "analysis" was, e.g., based on an assumption of a land cover map! Another important factor for distinguishing will be the spatial resolution. Both aspects have a high influence on what kind of frequencies you accept as natural in a landscape map. So in order to test your realizer, I think you should put your info regarding resolution and definition of landscape up front, but naturally leave info regarding the geographic region open.


Score: 12

I tried to find water flow patterns and, lacking that, noted that several of the images contained compression ridges that their pairs had nothing similar to. The one piece of information I kept wanting was an elevation/color scale.


Score: 15

observed how similar features seemed to "flow"


Score: 11

I avoided scattered dots without any link


Score: 8

You shouldn't use my responses for your data because I couldn't see all 10 colors. I wanted to see the test landscapes though.


Score: 11

Just looked for real-looking habitat zones (edge, water, etc.). But this takes far longer than 8 minutes! Also, I couldn't distinguish 10 individual color types at the start of the program. In fact, I could only see about half that, but I took the test anyway (at a disadvantage, I believe).


Score: 12


Score: 13

Perhaps I rated myself too low on knowledge/identification. The fractal maps had a greater tendency to look scattered/randomized; they had a greater tendency to produce lower-density, less connected dendrites/percolation patterns. It is these patterns, and evidence of overall more connected (geologic/ecologic) patterns, that I looked for. I am a landscape architecture graduate student interested in the use of fractal landscape generation in restoration design work. If you have any interest or input into this, please let me know. I have been unable to find a program, individual, or group able to help me in the generation of realistic maps with meaningful categories. Can you help?


Score: 11

In the beginning I assumed the real map would have more "detail"; then I began to worry that a lot of the detail was, in fact, scatter.


Score: 8

connectivity; repeated pattern


Score: 13


Score: 9

Watersheds, outliers.


Score: 13

I interpreted the white, sometimes dendritic regions as water areas. There should be change patterns around the water, but sometimes the synthetic system duplicated this. I believe I could have done a lot better knowing the scale beforehand.

One other technique I used to tell the two apart was to select the least apparently self-similar or fractal looking image.

In the presence of other info, like elevation, I think that my scores would improve. Then again, maybe not.


Score: 18

Spatial homogeneity and "smoothness" of the boundaries. This means that I automatically rejected any maps with many "single pixel" occurrences and "sharp" boundaries.

I also selected based on first impression, which means that I did not redo a selection and took 30 seconds or less to make a selection.


Score: 11

Visual correlation of the colored areas with each other. Consistency of shape of the colored areas.


Score: 6

The fractal-generated landscapes looked valid; I think my low score was due in part to a disorientation of scale on my part. I was thinking Landsat scale instead of, say, AVHRR (land cover instead of ecoregion).


Score: 13

Tried to pick the one that didn't look like a fractal...


Score: 8

Connectivity, patch size, linear features, and diversity within each scene. What a terrible score! 40%

Quite a fun test to do - I shall encourage others within the department

regards


Score: 13

Made some assumptions about natural areas, e.g., no large areas of the same shade, not totally random. However, without knowing which I got wrong, I have no idea how accurate my assumptions were.


Score: 13

  1. Fine-scale detail, geomorphological patterns such as dendritic edges to possible river systems
  2. "Blobbiness" of the areas defined by each colour, whether they formed a style of organisation that could be related to some geological control mechanism
  3. I certainly gained the impression as I went through the sequence that the maps were showing at the same scale and that the pairs were related to each other; river features were clearly apparent as the same in both sets of data. I was expecting the synthetic landscapes to be topographically generated because I have a geostatistical simulation background.


Score: 11

Recognisable occurrences, e.g. rivers, stratified areas.


Score: 10

I tried to look for the most detail.


Score: 7

I made the assumption that the real landscape map would not have sharp resolution details; i.e., several different geographical terrains would be depicted by the same color, and that the synthetic map would have differentiated these differences and presented them in sharper detail. However, a lot of details needed to be considered and discerned to make the decision.


Score: 11

I guessed a lot but favored maps with large areas of the same color or consistent trends


Score: 14

I'm not sure the method I used was practical (or even logical). Anyway, I decided that the map with the less 'chaotic' distribution of colors was the real map, since a computer-generated map may have the colors distributed in a more dramatic way due to possible round-off errors, etc...


Score: 8

I basically guessed as I have no experience reading this type of maps. I am one of Dr. Berry's students that you gave the guest lecture for.


Score: 11

I rated myself as only slightly better than poor at this, so all I really could do was try to determine which map looked less complicated or less busy. I simply guessed that the computer would enhance an image more than a traditional map would. This was really neat.


Score: 6

I tried to look for recognisable patterns (e.g. "realistic" drainage networks, "plausible" evidence of ribbon development along cultural arteries, etc.). But I evidently failed dismally!

I was also thrown by what I thought was a clue (which I tried to ignore but was unable to totally dismiss), in that in every pair, one of the images took significantly longer to draw on my screen than the other. I made a (wrong!) guess that this was the simulated one, and that the delay was caused by a lag in the time required to generate it! You might bear this in mind as a possible flaw (or weakness) in your method; admittedly one more likely caused by the inherent nature of the medium than by anything you have directly done!

A fascinating exercise, though, and one I enjoyed doing. Thank you, and I would be interested in reading your results when you eventually publish them...

best regards from Ireland!


Score: 11

More like Rorschach tests than any kind of landscape I've ever seen. Have you seen the fractal landscapes drawn with VistaPro?


Score: 13

How detailed the larger areas were? The different gradation of color?


Score: 13

I looked for isolated patches to identify simulated maps


Score: 9

By the ones where the colors were basically consistent, not strewn about.


Score: 10

Without knowing what the landscape was, it was hard to identify real vs. fractal.


Score: 11

Detail characteristics.


Score: 9

1st impression


Score: 13

I really had no clue. Guessing was the best method. Smoothness seemed to be important. Similar colors clustered together. I was not a very good person for your experiment.


Score: 13

Without the introduction, I wouldn't have recognized either of these pictures as landscapes, sorry.


Score: 14

Tried to identify obviously fractal-like artifacts - didn't find very many, though.


Score: 11

I was trying to look for fractal-like designs in the landscapes to try to see which ones weren't real.


Score: 8

I looked for the common features, then tried to decide which of the others were more appropriate.


Score: 9

There is really not enough information to make an adequate decision.


Score: 13

Very distorted regions didn't seem natural


Score: 9

My strategy was to attribute higher-definition maps to your fractal realizer. As it appears from my test results, that strategy is ineffective for me. I had anticipated that your method would yield a more coherent pattern grouping with a greater range of types; in every instance, if I saw more elaborate branchings, or a greater range of color, or smaller aggregate grain, I attributed the image to the fractal generator.


Score: 18

Some of the fractal landscapes were quite good, others looked too fractal, with spidery, repeating patterns or apparently random placement of colored regions. I identified real landscapes by areas of color within color or distinctly non-fractal features.


Score: 6

Very hard to distinguish. Looked for "imperfections" in patterns as a sign of the real one; I may be mistaken.


Score: 12

It would have helped if there had been some sort of colour key.


Score: 11

Just by looking at them. I do not know which 11 I got correct! This is very good ;-)


Score: 8

Fractal landscape tools tend to introduce too many small details. It looks like the Realizer is pretty well balanced on that; my result could have been better if I had just picked a random one from each pair.


Score: 14

By its discontinuities and fragmentation.


Score: 7

Both the real and the artificial landscapes looked pretty fake to me.


Score: 12

Some erosion on the landscapes didn't look consistent with what would happen in reality.

It would make it easier to distinguish between the real and synthetic landscapes if context were given to the colors, enabling one to pick out natural phenomena that don't appear next to one another in nature (and thus see the synthetic for what it is).

All in all, those are damn fine synthetics.


Score: 12

drainage patterns and assumption about vegetation


Score: 3

I distinguished the landscapes by their "fuzziness". I assumed the more "fuzzy" a landscape is (the fewer solid color blocks and the more "lonely pixels" it has), the more "realistic" it is. It turned out that the vice-versa assumption would have been correct, and given the fact that I had no prior experience with landscape reading, my results are pretty good (I was able to distinguish 85% of the landscapes as created from a single source). Sorry, I think your program failed the Turing test.


Score: 14

Natural look, balance, logic, presence of artifacts.


Score: 9

Looking at patterns of vegetation around "perceived" water bodies. I also looked for vegetation patterns in general.


Score: 5

In general, I tried to distinguish the landscapes by looking for continuity in the color patterns.


Score: 17

The fractal landscapes had a "dustiness" - a sort of granularity which occurs when two or more areas meet or overlap in a narrow region.


Score: 12

I tried to see if some pattern was inconsistent with those close to it, such as small points or small polygons that did not look appropriate for the scale of the map. However, the fractal maps look very close to the real maps, and I think this is a very good tool to include in map-generation projects that use modeling, such as hydrographic or pedologic models.

Thanks for the test


Score: 13

Cluster of colors and scatter. Roughness of edges.


Score: 8

I tried to look for more detail and variance in the maps


Score: 12

Looking for "fractal structures" and consistent structures.


Score: 10

The white areas remained the same between the two images. I assumed white was water and green was vegetation, and went from there.


Score: 12

Assumed images with pronounced feathery structures were fractals. I generally picked the first one that came to mind, using instinct rather than logic.


Score: 13

Mainly based on apparent 'randomness' in the map. Most natural features are smooth rather than spiky. Maps with excessive single-pixel elements are more likely generated. Also, clusters in nature follow chains rather than random splotches. An excellent program, well done!


Score: 11


Score: 11

I tried to think of the original as a seed for the game "Life" and of the fake as the result you would get from running it for a short period of time. Unfortunately, rather unsuccessfully!


Score: 12

No Comments


Score: 12

Basically on what "looked" right. Some were impossible to tell; a few looked "too" detailed to be real, but the scale was much larger than I thought.


Score: 15

I hadn't paid enough attention to how the realizer generated the maps - it obviously had some data about the landscape, as it produced similar-looking maps. If I had known what information it was basing its map on, I might have been able to spot the artificial maps better.


Score: 11

After viewing the maps I let my left brain choose the more appealing version; then I kicked in the right brain to see if any features appeared to resemble the classic fractal structures.


Score: 11

Occurrences of symmetry and geometric repetition.


Score: 14

I tried to find the same blocks in the pictures.


Score: 8

Tried to think about the topographic maps I used to study in geology classes and compare them to these.


Score: 10

Yeah, it's really difficult to distinguish the real one. But I want to know what this test means. If you want to prove that the computer can simulate human thought in a particular field (e.g., painting), you should give us two figures to distinguish: one drawn by a man and the other drawn by a computer, so that we can find out how well or poorly the computer does. Here, you give a real map, so the test can prove nothing except that the computer can simulate a real terrain. This, I think, is not the main idea of the Turing Test.


Score: 10

I tried to use continuity in the color zones to tell a real one from a generated one. I assumed the basic principle that nature does not have too many sudden changes, so if the edges were too 'fractal-ish', I suspected that the map was generated. Also, many small patches of 'fractal-ish' imagery might suggest it to be unnatural; natural weathering processes will smooth them out over time.


Score: 17

I mostly used intuition. It seems that the real maps have fewer isolated pixels of a certain color and more straight edges.


Score: 5

I was guessing. I couldn't distinguish. I went with what seemed to me to be higher detail.


Score: 14

I looked for interruptions in the chaotic patterning, e.g., a straight boundary amidst curly fractal edges. This was unexpected and natural in appearance. Also, "stream" or dendritic objects and how they interacted with adjacent polygons. This is interesting.


Score: 13

Being unskilled, I tended to go on aesthetics. I just looked for some natural 'flow' to the regions.


Score: 13

More block-like shapes for the real ones. I tried to visualise the effects of water courses etc. The problem was that I did not know what landscape features were being mapped.


Score: 8

I tried to pick the simulated maps by looking for generated artifacts. Obviously, it didn't work!

I was quite surprised by the similarity between each pair. I assume each of the simulated maps was generated from its real counterpart.


Score: 12

I wonder how valid the test is, as the colors are very difficult to understand; what aspect of the landscape are they supposed to represent? Your combination into an ecoregion is not necessarily the way many would view a landscape. Until a key to that is provided, it may be difficult to present an objective test to see how good the fractal test is.

Why not pick a less complex way to represent the land for the purposes of your test?

Just what does the reference map represent, anyway?

Also, as you consider this score: both images failed to appear on three of the pairs.


Score: 11

I tried to look for unnatural anomalies. I think I spotted weird dots, too "fractal" an appearance, and overly symmetrical geometric shapes (such as a letter "x"-like thing in one picture).

This was nice, and I'm happy I could help.


Score: 13

Continuing features across multiple colors, like the fractal-like forms of "rivers". Also, a lot of gut feeling.


Score: 13

Usually, the fractal landscapes were more complex than the real landscapes. Real landscapes seem to have more erosion, or their surfaces are smoother. I concentrated on water-erosion areas (like rivers) and took note of the complexity of the drainage areas. More complex erosion is real.


Score: 15

I looked for congruency in colours and colour borders. I looked for a mixture of simplicity and congruency.

Example: If maps had drastic colour changes along borders, I did not view them favourably. If maps were spotty or overly complicated, I did not think them natural.

Hope that helps.


Score: 14

It sucked


Score: 4

That's pretty good! Either that, or all of my computer-like personality traits aren't imagined.


Score: 11

I tried to find "fractal" features that looked like something generated with, e.g., Fractint; along the way I got the idea that the real images had more isolated points.


Score: 13

After the first several pairs, I looked for "blobs" or clustering effects. I did not find this test very meaningful because (a) the maps were very hard to see on a high-resolution monitor, (b) I had no idea what I was looking at, and (c) in many cases it was clear that the artificial map was derived very closely from the real map -- there was extraordinary overlap of common pixel values. Thus, there was very little basis for discrimination, so it is not surprising that people (or even a metric) would have little power to discriminate. All in all, this is more like a native English-speaking person trying to take the original Turing test on a Kanji terminal.


Score: 16

I tried looking for realistic landscape patterns. Usually landscapes are not overly fragmented (well, sometimes they are...), but they are mostly consistent within an area. Besides drainage patterns, striations resulting from topography are usually consistent over an area.


Score: 16

By the boundary roughness... and the presence of isolated dots.


Score: 13

Looking for linear features and assuming they might be rivers and river valleys; if they were continuous, then I thought the map more likely to be "real". Otherwise gut feel, but the results aren't much better than random!


Score: 14

I'm guessing from the images that some data is provided to the fractal realizer, which then interpolates based upon fractal pattern algorithms. Otherwise, the real and synthetic maps would not have looked so similar. It was quite difficult to determine real landscapes versus synthetic because of the low-resolution depictions. For most, I had no idea which was actual, and felt that either could be. Without knowing which I identified correctly/incorrectly, I don't know if the visual criteria I used were accurate. Maybe you could provide test results with answers adjacent to the image pairs after completion.


Score: 2

I tried to distinguish the maps based on mental comparison to the thematic land-cover classification maps I create from Landsat TM imagery. I incorrectly, but consistently, picked the artificial landscape as the real one. The minimum areas and interspersion that seemed realistic to me, based on 30 m pixel resolution, obviously don't correlate with 1 km resolution.


Score: 1

Very nice. However, I noticed that while these depict the same place, the predicted patterns differ. Therefore, what is the accuracy of this tool when compared to the actual imagery?


Score: 14

I looked for fragmentation that did not follow a pattern. For example, it is possible to have single-pixel colours on their own (i.e., surrounded by another colour), but I still expect them to be, say, localised (not everywhere), or to follow a (squiggly) line. I also found I could occasionally recognise features such as rivers.


Score: 11

Similar-sized large areas of several different colors near each other I regarded as being artificial.


Score: 11

The white seemed to indicate rivers to which I tried to match the rest of the environment.


Score: 11

I have to admit that at least some of the time I was looking at the maps, in part, as coastlines and elevation maps. This is partly due to your comments prior to the test about not looking at land use etc., and partly due to my previous experience of colour elevation maps in, e.g., Vistapro and other landscape generators. I looked mainly for boundaries between colour regions that looked too broken up, and for individual pixels in the middle of solid regions of a different colour - random noise effects that I intuitively expected the generator to produce.


Score: 10

At first I went slowly, looking for one color within two or more colors that seemed to cross two or three boundaries. Then it got boring, so I tried to guess quickly.


Score: 12

I looked for clustering of a color, and a generally consistent progression from one color to another. Rather like looking for the snow-caps on mountain tops; one wouldn't expect to see white areas at low elevations.


Score: 14

I tried to distinguish the landscapes based on degree of local homogeneity, using the great geography axiom that 'closer things tend to be more similar.' I also tried to identify key features such as rivers and then determine whether I thought the pattern was reasonable for that given geographic feature.


Score: 7

I think this is quite a difficult test for the person (easy for the computer), because I was not sure what a map should look like. Every map is different depending on scale, information, and cartographic methods. I thought initially that the real maps would be more detailed in terms of the complexity of the edges; halfway through, I thought the opposite was true. I think I would have done better if I had been shown some classified examples. On the other hand, some pairs seemed to have no discernible difference.


Score: 8

By thinking in geological aspects about the distribution of lithologies


Score: 11

I pretty much just tried to identify the most simple and yet most natural pattern.


Score: 15

If you have seen and experimented with the triadic Koch curve and other similar curves before, you can more easily make a realistic comparison between the two kinds. In particular, the edges of the figures on the synthetic maps look like these curves, but the real ones don't.


Score: 8

I was uncertain as to whether the fractal realizer over-emphasized the fractal nature of the landscape.

Also I was uncertain how disturbed these areas were which would have some bearing on the fractal properties.

I would like to see which ones I got right and wrong, to get a better feel for what the realizer is producing.


Score: 8

The amount of fragmented pieces. Also, straight edges in some patches.


Score: 11

I tried to spot the computer image; if it looked too perfect, then I assumed it to be computer-made... If any part of the image contained something that looked like a fractal image, I compared them and assumed the clearer one was made by a computer. :) Good luck!


Score: 14

Clustering of original land/sea features to landscape features


Score: 9

Looked for too-consistent fractal patterns - a non-multifractal appearance.

Some of these images appeared to be identical except for pixel coloring - this is bogus!


Score: 13

Looked for biomes/areas that seemed too isolated, tried to imagine water flowing through the landscapes, and imagined actual fractal graphics I have seen before.


Score: 11

Elevational gradients, boundaries with nodata, linear hydrology. Great exercise, Bill!


Score: 13

I looked for patterns that were more fractal than I would expect from nature.


Score: 12

I tried to look for exclusive pattern combinations of classes, e.g. odd derivations from overall picture pattern. But frankly, it was difficult. We had no clue on what map type you showed (thematic maps, classified RS images,...)


Score: 16

In some cases, the simulated landscapes looked more real than the real ones - that was the giveaway! I have worked with stochastic simulation tools in petroleum engineering, so I did have a bit of a clue.

The synthetic landscapes are more fragmented than the real ones; i.e., they have less continuity. They also, as I implied above, look too typically fractal - nature doesn't usually start off on a flat playing field, but is influenced by the underlying topography.

Good fun - thanks.


Score: 12

I looked for evenly curved lines rather than jagged borders.


Score: 14

I looked for common patterns that fractals generate, patterns that are uncommon in map generation, but mostly I looked for fractal pattern generation. Also, I don't think this fits the definition of the Turing test. It certainly is a decidability problem, but you've made assumptions about the interviewer and what questions that person would ask... But if you say so.


Score: 10

Based on my experience with fractal generation.


Score: 9

I tried to look for patterns that might represent hydrographic features (to no avail).


Score: 11

boundary characteristics - discrete versus "fuzzy"


Score: 8

Looking for thin tendril patterns. It didn't work very well, did it?


Score: 13

General "smoothness"?


Score: 5

By looking for natural looking forms and unnatural changes in landscape


Score: 14

I tried to look for patterns across different scales, and also for artifacts that seemed to be linked in some sort of sequence through a larger chain of objects, a bit like watching the spirals and floating things in a Mandelbrot or a Julia set, or the plot of the complex solutions to a polynomial.


Score: 8

How I tried to distinguish: differences in the complexity of color patterns in the two images; whether certain patterns were "too" generated (i.e., too "paisley"); whether the overall shape of the picture seemed generated or random.

Comments: this is an unfair test!


Score: 8

Fractals often look too complex, so I chose the 'simpler' shapes. Obviously that was the wrong strategy. :)


Score: 12

There are structures in the pictures, flowing from one direction to another. I am aware of irregularities in these structures, parts that don't fit with each other. And there were pictures, only mirrored, that I had seen before in the queue; those were easy to identify. But nevertheless, I don't have any clue what the pictures really mean, so I sat down and viewed them from 4 meters away from the monitor to let my intuition make the decision.


Score: 10

I imagined the positions of rivers and hills, then I supposed where the vegetation would be.


Score: 10

I assumed "spikier" landscapes were more likely to be generated, and "smoother," more cohesive landscape to be real.


Score: 12

Looked at the size of finger-like extensions and how common they were, looked at the "reality" of edges, and looked at the internal detail of landscape edges.


Score: 9

It was completely an intuitive response at first. By landscape choice 3 I realized that I could not tell the difference between real and neutral landscapes.


Score: 13

Looking for obvious/ repeating anomalies.

Without context, it was too difficult to determine which is real.

Good ideas, here!


Score: 14

By guesswork!


Score: 10

Anomalies.


Score: 12

You are cheating! Your program either uses an existing map as a base or finds a real map segment that matches its synthetic map well.


Score: 9

I looked for continuity; a landscape with splotches seemed fabricated. If there was apparent flow (if it looked like an elevation plot), I was more tempted to vote for it.


Score: 19

Real landscapes hardly show any pixels that are not 8-connected with fellow class members.
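This 8-connectivity heuristic (also echoed by the "lonely pixels" comments above) is concrete enough to sketch. A minimal illustration, assuming the maps are simple grids of integer class labels; the grid format and function name are illustrative, not part of the Fractal Realizer:

```python
# Sketch of the respondent's heuristic: count pixels that have no
# 8-connected neighbor (the 8 surrounding cells) of the same class.
# A high fraction of such "lonely" pixels would, by this heuristic,
# suggest a synthetic map.

def isolated_pixel_fraction(grid):
    """Fraction of cells with no 8-neighbor sharing their class label."""
    rows, cols = len(grid), len(grid[0])
    isolated = 0
    for r in range(rows):
        for c in range(cols):
            has_mate = False
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue  # skip the cell itself
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == grid[r][c]:
                        has_mate = True
            if not has_mate:
                isolated += 1
    return isolated / (rows * cols)

# A lone "1" in a field of zeros is isolated; the zeros all touch each other.
example = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(isolated_pixel_fraction(example))  # 1/9, about 0.111
```

Comparing this fraction between the two maps of a pair would mechanize the respondent's rule of thumb.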


Score: 10

Looked for hard lines/dithering - a general lack of smoothness.


Score: 11

I looked for patterns inside patterns that struck me as odd.


Score: 13

Some of the maps had a more "fractal" look about them, i.e., tendril-like curves.


Score: 13

Continuity of like areas. I had thought that the fractal landscapes tended to diverge a bit too far and make thin "stringies".


Score: 15

It is just a holistic "looks good and right" approach.


Score: 11

I suspected that the generated landscapes might under-diversify land-class categories around what appeared to be surface-water structures (e.g., streams). Apparently not!


Score: 9

Mostly I looked for linear artifacts or things that were anchored on area midpoints (1/2, 1/4, etc.).


Score: 12

The fractal landscapes tended to have more noise in them; not to say that nature isn't noisy, but nature tends to organize that noise in a nice way. With more effort, I think it would be possible to improve my score (if that were a goal) by looking at the pictures more carefully, and attempting to look again for the "noisy" picture, and disregard it.

Without looking at each pair again that I viewed, I couldn't point out the exact features that I felt gave away each map; however, a second pass through, I could probably point out what "gave away" the fractal generated map in the ones where I picked it out.

Of course, of the 12, I think I was only fairly certain about 7 or 8...


Score: 10

Certain images looked more fractal-like.


Score: 9

Similar colours following narrow river channels.


Score: 11

I looked for flow of landscape elements and tried to envision a landscape with similar structures.


Score: 9

Autocorrelation and distinctive features... obviously it did not work very well.


Score: 15

Some of the patterns were repeated across the different maps and were quite similar, so I thought they were unlikely to be real features.


Score: 13

I looked for more "natural" (for lack of a better word) patterns in the colors. I wasn't necessarily looking for more detail to identify the real map, but I tried to associate the placement and nature of the detail with my own interpretation of what comprises a real landform. The difficulty in this was that there was no context for the colors; I was left to make several assumptions based only on the shapes of the different colored areas.


Score: 9

Proximity similarities (gradual breaks from region to region)


Score: 15

variation in pattern


Score: 7

patterns of clusters

I tried to "see" water body patterns, land cover patterns, etc.


Score: 12

The map with more straight lines


Score: 9

I picked randomly


Score: 12

More accurate, but I didn't do so well.


Score: 7

No.


Score: 8

I have no clue


Score: 10

Looked for detailed, not blob-like, coloring.


Score: 12

By what I thought might be more realistic in color variations and in the areas surrounding larger or obvious objects.

I did not like this test because I have no experience with fractals or reading maps like these.


Score: 10

Some of the landscapes looked like sections of the earth, as some continents would look on a world map.

This was an interesting test.


Score: 10

I have no idea which were the simulated and which were the real landscapes. To my untrained eye, I can see that some are more detailed, but I don't know whether those are the real or the fractal ones.


Score: 13

I tried to determine which one had the more realistic overlays and gradations


Score: 8

I tried to identify areas where there were both networks and contrasts perpendicular to the networks (such as elevational contrasts). The realizer seemed to do well in fooling me. However, I'm not extremely familiar with "ecoregion" type maps. These types of maps are usually produced through a human-based "clustering" scheme which yields patterns that are somewhat different than one would expect for pure variable maps such as mean precipitation or soil water. Because of this unnatural "clustering", I'm not sure how realistic the Realizer is in simulating real ecosystem patterns.


Score: 16

Changes in fractal dimension indicated real landscapes. Also, sometimes it looked like there was information that "couldn't have been made up" - this must have been the real image.


Score: 14

I was influenced by the degree of correlation or noise in the figures. I think the fractal images contained more noise and some subtle mis-correlations of patterns.


Score: 14

I didn't distinguish them very well. However, the artificial ones seemed "too fractal", i.e., they didn't have clear edges.


Score: 10

I tried to look for relatively smooth transitions. Since I only scored 50%, it's likely that my choices were random.


Score: 12

It's tricky because of the resolution


Score: 10

I tried to look for patterns reminiscent of previous terrain maps I had seen.


Score: 12

examination of the edges


Score: 8

I looked for complexity in the maps


Score: 6

I have a set of braided-stream spatial data from the middle Platte River valley. They came from my research project. I would like to see how well your Fractal Realizer works on them.


Score: 12

This is not a realistic test, in that many of the natural recognition elements are damaged by the false colors.

Also, you purposely chose fractals to match the actual real estate in significant areas of the image -- i.e., the fractal images are cooked.


Score: 15

I tried to look at the dispersion patterns... how the colors looked.


Score: 10

I tried to look for things that seemed discontinuous and not normal for geographic maps. But in most it was very difficult to see anything that gave any hint either way. This is a really cool program.


Score: 9

I looked for signs of obvious recursion and also tried to use a kind of "common sense" in what nature actually does in my experience. Wasn't easy :)


Score: 10

It was very hard. The resolution was too low, I felt. It is very easy to create fractal forgeries at low resolution, but much harder to create them at high resolution, where the fine detail is important. This is because although most natural objects (i.e., landscapes) have self-similar properties, they are not truly self-similar in the same way that fractals are; their shape and form vary with further detail. I am well aware that it is possible to vary the fractal nature at different altitudes, frequencies, etc., but it is very hard to model reality in that way at high resolutions.

I attempted to select the real landscapes in your test by choosing the ones that had the most variance between macroscopic and microscopic detail - with spectacular failure. I do believe that this method would work better with higher resolutions, however.


Score: 12

I looked for riverine features, although sometimes these appeared exactly the same in both maps.

I used intuition

I chose the one that seemed less broken up / had fewer tiny regions.

Some were more similar / difficult than others.


Score: 17

My experience of such maps is with radio propagation modelling for cellular radio systems. Sometimes the features appeared just too random and scattered, or with too much variation in a small area. Some were tough though, and maybe 30% of my responses were guesses.


Score: 13

Real Landscapes are more irregular.


Score: 13

Looked for fine structure, looked for elevation cues


Score: 12

I looked for complexity rather than simplicity in spatial patterns


Score: 12

I tried looking for too-drastic changes in the colorings, mainly. Otherwise, I went on feel.


Score: 7

I looked for spindly, finger-like structures and called that one the fractal; apparently it didn't work.


Score: 14

expressions of linear (prob drainage) patterns


Score: 9

Quite noisy landscapes. I tried to find areas that share color/connectivity, and to spot weird patterns that repeat that connectivity. Although, mainly both images had that :)


Score: 8

I thought that the ones with more jagged and discontinuous outlines of regions were more likely to be real, but since I scored at chance or thereabouts, that assumption is not correct!


Score: 9

Both maps are grid models of the real world. Both are equally false or real. How can one grid map be any more "real" than another? Each is simply built in a different manner.


Score: 8

hmmm..


Score: 10

  1. Avoid features that seem to hang out in the middle of nowhere or overlap each other as if unrelated
  2. Pick the one that just "looks like something."

You didn't ask how realistic the real landscapes were -- to my eye, not very! Also, the program seemed to be trying to imitate the look of the real scene. I think this may bias the test (although in what way I don't know!). Maybe half the questions should have dissimilar scenes, where the program-generated scene is not based on any one real scene but on something like the overall statistics of your real data set. Thanks, it was fun!


Score: 13

more smoothness of transition in real landscapes


Score: 11

I gave the low rating on realism for the simulated maps, but the real maps were pretty unrealistic too. I tried to pick maps that looked like the water regions (usually white?) made sense with what was going on around them, generally.


Score: 9

The simulator gives some very realistic-looking landscapes, at least at this level of abstraction. There were often consistent differences between the members of a pair, although it was difficult to know which was which, suggesting that with some experience it would become easier to tell them apart. One map often seemed to have more large-scale lumpiness. There were also elements that were exactly the same between the dyads, suggesting that some sort of constraint or starting point was given to the simulator.


Score: 7

I chose those which felt more organic. It's strange that the vast majority seemed to be on the left.


For additional information contact:

William W. Hargrove
Oak Ridge National Laboratory
Environmental Sciences Division
P.O. Box 2008, M.S. 6407
Oak Ridge, TN 37831-6407
(865) 241-2748
hnw@geobabble.org

Last Modified: Thursday, 17-May-2007 11:58:22 EDT