Tag Archives: Fractal Architecture

The genesis of complex geometry

I don’t believe there is a dichotomy between a supposedly modern architecture and a traditional one. Instead there exist different geometric processes: traditional builders employed nesting processes in their work, perhaps for no other reason than that it came naturally to them, while modern builders have restricted themselves to linear geometric processes, drawing their inspiration from Cartesian science and engineering.

In attempting to transform architecture into a vessel for artistic expression, modern architects have been trapped by their limited tool set, and the product of their work has often been confusing, silly, or utterly corrupt. There are only so many tricks one can perform with linear geometry, although computers have extended the reach of those tricks. The confusion of modern architects becomes even more obvious when they ascribe artistic merits to traditional builders who never aspired to be artists at all. One such instance is the introduction, written by official starchitect Jean Nouvel, to a recent biography of the 18th-century French military engineer Vauban, in which Nouvel describes Vauban’s fortresses as an early form of land art and morphing. Could a man be an artist without being aware of it, Nouvel asks? Vauban was not an artist at all. Military necessity led him to employ geometric processes that significantly increased the complexity of fortifications, and it is merely incidental that today we find his projects to have artistic merit.

The process through which Vauban’s work became worthy of architectural praise provides the key to the distinction between linear and nesting geometry. Vauban did not himself invent the star fort; star forts had been around for more than a century when he began his career in the army of King Louis XIV. The basic star fort was a simple concept: the old masonry walls of the medieval age had been made obsolete by the advent of cannons, and they were replaced with thick banks of earth dug out of trenches, whose major flaw was that their angles left space out of reach of defensive fire. The angles were thus extended into diamond-shaped turrets in a first pass at feedback correction, introducing nesting geometry and initiating the first step of the genesis of a fractal.

A basic, early star fort (Neuhäusel, 1680)

While the star fort was successful at resisting attacks, it was not impregnable. A method was devised to capture star forts by digging trenches in zig-zagging patterns through which troops could assault the walls without being exposed to cannon fire. This is in fact how Vauban built his career, and some of his “plans” for besieging star forts are significant civil engineering projects in their own right.

The siege of Turin, 1706. From an encircling trench, Vauban built successively denser trenches to capture the citadel and take the city, a process that was extremely expensive and time-consuming.

While star forts never truly became obsolete (as medieval fortifications had) until well into the 19th century, military engineers did improve their effectiveness by correcting their vulnerabilities, which once again lay at the angles that characterized them. And so, by another layer of feedback, the geometric depth of the star fort concept increased.

San Martin Citadel, a “second generation” star fort.

Vauban’s great invention was little more than repeating this process of increasing depth one more time, creating what many now consider to be his masterpiece, the Citadel of Lille: a showcase of complex geometry produced by centuries of feedback refining the star fort concept.

Citadel of Lille and the system of fortification of the City of Lille, as designed by Vauban

If you only understand Cartesian processes, the only idea that may come to you for improving on the basic star fort is to add dozens of diamond-shaped turrets, a change that would most certainly make the concept worse instead of better. The military engineers of the time, however, were well aware that the diamond turrets were already optimal in shape. What was needed was a shape that extended the diamond, and this was achieved by increasing the depth of the whole object.

Another aspect of the complexity of a geometric process, seen in the Lille example, is its adaptiveness to configuration. The shape of the city and the surrounding landscape is completely random, and the encircling fortifications bend to match this randomness, leading to Nouvel’s claim that it is an early example of morphing. But once again there is no deliberate attempt at morphing going on. Since each component of a star fort is defined as a recursive relational transformation of the basic wall, Vauban only had to design the wall, and the other parts aligned themselves as a result of the wall’s configuration. If the outcome has artistic value, it is once again only incidental.
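
To make the idea of a relational transformation concrete, here is a minimal Python sketch under assumptions of my own: bastion tips are placed on the outward bisector of each corner of the wall, and ravelin tips on the outward normal at the midpoint of each curtain. The geometry is schematic, not a reconstruction of Vauban’s actual methods, but it shows how every component is computed from the wall trace alone, so a different wall automatically produces a different, adapted fort.

```python
import math

def _norm(v):
    """Length of a 2-D vector."""
    return math.hypot(v[0], v[1])

def _unit(v):
    n = _norm(v)
    return (v[0] / n, v[1] / n)

def star_fort(wall, bastion_reach=1.0, ravelin_reach=0.6):
    """Derive bastion and ravelin tips from a closed polygonal wall trace.

    Every component is a relational transformation of the wall itself:
    - a bastion tip sits outside each corner, on the corner's outward bisector;
    - a ravelin tip sits outside the midpoint of each curtain wall.
    The wall vertices must be listed counter-clockwise.
    """
    n = len(wall)
    bastions, ravelins = [], []
    for i in range(n):
        prev, cur, nxt = wall[i - 1], wall[i], wall[(i + 1) % n]
        # Outward bisector at the corner: sum of the unit vectors pointing
        # back along the two adjacent walls, flipped to point outward.
        a = _unit((prev[0] - cur[0], prev[1] - cur[1]))
        b = _unit((nxt[0] - cur[0], nxt[1] - cur[1]))
        bis = (-(a[0] + b[0]), -(a[1] + b[1]))
        if _norm(bis) > 1e-9:
            bis = _unit(bis)
            bastions.append((cur[0] + bastion_reach * bis[0],
                             cur[1] + bastion_reach * bis[1]))
        # Ravelin: outward normal at the midpoint of the curtain cur -> nxt.
        mid = ((cur[0] + nxt[0]) / 2, (cur[1] + nxt[1]) / 2)
        edge = _unit((nxt[0] - cur[0], nxt[1] - cur[1]))
        normal = (edge[1], -edge[0])      # outward for a CCW polygon
        ravelins.append((mid[0] + ravelin_reach * normal[0],
                         mid[1] + ravelin_reach * normal[1]))
    return bastions, ravelins

# An irregular wall trace: the derived parts adapt to it automatically.
wall = [(0, 0), (5, -1), (8, 2), (6, 6), (1, 5)]
bastions, ravelins = star_fort(wall)
print(bastions)
print(ravelins)
```

Change the wall polygon and every bastion and ravelin moves with it; nothing has to be redesigned by hand.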

It is important to note that the Vauban extensions to star fortifications did not make the simple 3-part star fort obsolete. In fact many simple star forts were built in the 18th and 19th centuries in America, where the threat was low and the cities to be defended underdeveloped. The difference between a simple fort and Vauban’s complex fort is one of depth and effectiveness, and there is a real cost-benefit choice to make. The star fort only became obsolete when the bunker replaced it, and the early bunkers, simple concrete shells in their first incarnations, reset the process of complex geometry genesis.

When we undertake to create symmetry in an urban environment, we want buildings to be as alike as possible while allowing for adaptation to context. If we understand geometric depth, we can build in such a way that cheap and expensive buildings have the same basic design in their first levels of geometry, but expensive buildings have many more scales of geometry nested within that basic design. It is not necessary for an entire city to be made of the same materials, since materials are among the last visible scales of geometry; we can have a city of mud brick and marble buildings that nevertheless share most of their geometry and beautifully complement each other, while both poor and rich citizens have a home adapted to their situation.

We can look at these examples from Korean traditional architecture for an illustration.

On the left is a simple house and on the right is the Tomb of King Tongmyong in Pyongyang. Both buildings have the same design, but the tomb has much greater depth within that design.

Another interesting comparison is between the Golden Gate Bridge in San Francisco and the Verrazano Narrows Bridge in New York.

The bridges are the same in design, but the Golden Gate Bridge has more depth within this design, and it is for this reason the more famous of the two. That doesn’t mean the Verrazano Narrows Bridge isn’t beautiful on its own.

And to make things as simple as they can get, we can compare a Sierpinski triangle with four levels of iteration to one with six.

Geometric depth

The fractal on the right has all the same elements as the one on the left, but also has more.
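
The comparison can be reproduced in a few lines of Python. This is a minimal sketch of geometric depth: the same subdivision rule applied four times and six times produces the same design at two different depths. The triangle coordinates are schematic and the rendering is left to the reader.

```python
def sierpinski(vertices, depth):
    """Recursively subdivide a triangle, returning the filled sub-triangles.

    At depth 0 the triangle is returned whole; at each further level the
    three corner triangles are kept and the central one is removed.
    """
    if depth == 0:
        return [vertices]
    (ax, ay), (bx, by), (cx, cy) = vertices
    ab = ((ax + bx) / 2, (ay + by) / 2)
    bc = ((bx + cx) / 2, (by + cy) / 2)
    ca = ((cx + ax) / 2, (cy + ay) / 2)
    return (sierpinski(((ax, ay), ab, ca), depth - 1)
            + sierpinski((ab, (bx, by), bc), depth - 1)
            + sierpinski((ca, bc, (cx, cy)), depth - 1))

base = ((0.0, 0.0), (1.0, 0.0), (0.5, 0.866))
shallow = sierpinski(base, 4)   # 3**4 = 81 triangles
deep = sierpinski(base, 6)      # 3**6 = 729 triangles
print(len(shallow), len(deep))
# Every triangle of the shallow figure contains triangles of the deep one:
# the deeper fractal has all the same elements, plus two more nested scales.
```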

A lot of the residential buildings we create today would benefit from being more like the Verrazano Narrows Bridge. They try to be more than a simple house for a simple family and end up covered in tacky, useless ornament that has obviously been forced into the design. Simplicity, if it is adapted to context, can create as beautiful a landscape as complexity. Postmodernist nonsense geometry does not. We would be better served by a return to the simplicity of 1950s International Style modernism than by what is being built by architects today. The best architects would reinvent it with greater depth.


References

Vauban, l’intelligence du territoire

Hommage à Vauban, 1969

A modern artist’s homage to Vauban. This artist did not understand complex geometry.

A demonstration of complexity in London

The immensely productive physicist-mathematician-entrepreneur Stephen Wolfram theorized, based on his studies of cellular automata in the 1980s, that there exist four classes of physical processes in the universe. Class I is simple continuous behavior (a line). Class II is repetitive behavior (a checkerboard). Class III is nested, hierarchical-fractal behavior (basic fractals like the Sierpinski triangle). Class IV, the most fascinating, is chaotic behavior (random fractals such as the Mandelbrot set). Wolfram believes that Class IV behavior, exemplified by the Rule 30 automaton, is behind the complexity we see in the universe, and that very simple generative rules produce it.

The way we as humans are used to doing engineering and to building things, we tend to operate under the constraint that we have to foresee what the things we’re building are going to do. And that means that we’ve ended up being forced to use only a very special set of programs–from a very special corner of the computational universe–that happen always to have simple foreseeable behavior. But the point is that nature is presumably under no such constraint. So that means that there’s nothing wrong with it using something like rule 30–and that way inevitably producing all sorts of complexity.
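
For readers who want to see these behaviors emerge, here is a minimal Python sketch of an elementary cellular automaton, assuming a single seed cell and wrap-around edges. Rule 90 grows a nested, Sierpinski-like pattern, while Rule 30 grows an apparently random texture.

```python
def run_automaton(rule, width=79, steps=40):
    """Evolve an elementary cellular automaton from a single live cell.

    `rule` is the Wolfram rule number (0-255); each new cell state is read
    from the rule's binary expansion, indexed by its three-cell neighborhood.
    """
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1            # a single seed cell
    lines = []
    for _ in range(steps):
        lines.append("".join("#" if c else " " for c in row))
        row = [table[(row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % width]]
               for i in range(width)]
    return "\n".join(lines)

print(run_automaton(90))   # nested, Sierpinski-like triangles
print(run_automaton(30))   # apparently random, unpredictable texture
```

Both rules are equally simple to state; only their outputs differ, which is the point of Wolfram’s argument.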

Wolfram gave this speech on his new science to big shot architecture schools at Yale, Princeton and MIT. He believes that his new science has profound implications for the generation of form in architecture. I agree with him, but not for the reasons he provided. In fact his classification of the geometric properties of different physical phenomena provides extremely profound insight into the history of architecture, and into its future.

A visit to London was what really made me appreciate this insight. London, as an architectural artifact, is quite unique in that its greatest period of growth, 1750-1850, coincides with the beginning of modernism in architecture, a time when architecture became in a sense aware of itself and went in search of its meaning. Neoclassicism was followed by Gothic Revival, Romanesque Revival and Neo-Venetian, all of it mixed up in eclecticism, and the invention of new materials and building processes confused things even more. Regardless of stylistic debates, what may be most important about that period is that, for the first time in history, large capital funds for speculative real estate development became available. Where architecture had once been a piecemeal business occurring quite randomly, in London, for the first time ever, housing subdivisions were possible. The result was terrace housing.

Chelsea South Kensington

The big housing developments in London were initiated by aristocratic landowners who hired architects to plan and control the form their estates would take. Walking through Chelsea and South Kensington, one is faced with sometimes overwhelming repetition of identical houses. Class II behavior, which Wolfram claims is fundamental to engineering, is plainly visible. The architects of the estates, not really knowing the specific constraints of the future residents of the place, opted for endless repetition of the same building. The fact that each building is a copy of the next, unadapted to the particular wants of its occupants, makes this standard behavior, far from complex.

The human mind is by nature fractal and is repulsed by Class II geometry, which is why architects have traditionally built Class III, hierarchical-fractal geometry. This was employed by some terrace builders, such as the architect of the Regent’s Park estate, John Nash. Here the monotony of the model is interrupted by nesting the houses within flourishes like arches, or within bigger houses with large porticoes.

Cumberland Terrace, Regent’s Park, London

You can see a 19th-century panorama of this terrace here.

Classical architectural education, based on the teaching of the classical orders, trained architects in the art of making such hierarchical decompositions of their buildings. As such, most high Western classical architecture, starting from the Renaissance architecture of Alberti (the first modern architect in the sense that his name is more important than any of his buildings, which was not true of the medieval architects of cathedrals), is rigidly symmetrical. Classically trained architects simply expanded the scales of decomposition as the size of buildings increased, up to the neoclassical skyscrapers that modernists considered ridiculous. The classical architects were right about the need to create fractal geometry by decomposing what were otherwise rigid engineering plans; the modernists denounced this as ornamental crime and philosophical dishonesty, and replaced it with elementary repetition in their designs, a regression to Class II geometry. People have hated architects ever since.

Whenever I read through architectural history books, even those of honest traditionalists like David Watkin, I am struck by what is clearly missing from the record: the towns built up over centuries, the accretion of simple building acts into complex symmetries. The topic is touched on by some thinkers of urban morphology, typically under the label of “organic” growth, such as in The City Shaped by Spiro Kostof, but everyone appears dumbfounded by the means through which such symmetry was accomplished. The whole career of Christopher Alexander has largely been dedicated to decoding this mystery.

Andalusia

But even in the 19th century, when large-scale development was sweeping London, some complex geometry was achieved. These are four distinct buildings on Lincoln’s Inn Fields.

Lincoln’s Inn Fields

We immediately notice that each building is different from the other, having been built for a unique purpose and therefore being a unique solution to a unique problem. Despite that, the buildings form a harmonious geometric composition because they share many transformations to which randomness is applied. Even within one building, Lincoln’s Inn on the left, randomness is visible. The tower is unique, but symmetric with the rest through shared transformations. What we are seeing here is, I believe, a genuine Class IV pattern.

How could this be possible? If Wolfram’s theory on the origins of complexity is correct, then there must be a very simple rule that produces this kind of streetscape. Such a rule can be applied to any random architectural demand and provide a perfectly appropriate solution to an individual problem while remaining completely harmonious with other such random solutions in its neighborhood. Since such organic complexity appears in all human civilizations, we must conclude that every building culture in the world has known, at some point, such a rule, and has applied it to solve building problems of all kinds. Without necessarily understanding how these rules created complexity, builders simply repeated them after each successful building.
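
As a purely illustrative sketch of what “a simple rule applied to random demands” could mean, here is a Python fragment of my own devising. Each building’s lot width and storey count are random (the unique problem), while the window rhythm, opening width and cornice are derived from them by fixed shared proportions (the shared transformations). The proportions are invented for illustration and are not taken from any historical pattern book.

```python
import random

def facade(width_m, storeys, bay_m=1.8, window_ratio=0.45, floor_m=3.3):
    """Derive a facade from a lot width and a storey count using fixed shared
    proportions. Every quantity below is a transformation of the inputs, so
    different lots produce different facades that still rhyme with each other.
    """
    bays = max(1, round(width_m / bay_m))          # window rhythm from width
    window_w = (width_m / bays) * window_ratio     # opening width from bay
    return {
        "width_m": round(width_m, 2),
        "height_m": round(storeys * floor_m, 2),
        "bays": bays,
        "window_w_m": round(window_w, 2),
        "cornice_m": round(0.08 * storeys * floor_m, 2),  # cornice from height
    }

random.seed(1)
street = [facade(random.uniform(6, 18), random.randint(3, 5)) for _ in range(4)]
for f in street:
    print(f)
```

Every facade is different, yet every facade is a transformation of the same rule, which is what makes a street of them read as harmonious.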

What to do with new technology? New technology necessarily adds a new scale to the rule, but the remaining rules are still valid. This is visible in the glass structure appended to the Royal Opera House.

Royal Opera House

We can see many shared patterns between the central structure and the structure on the right, but not with the new addition on the left. Typical of modernist architecture, the left building is made only of elementary geometry, barely even qualifying as a Class II structure. It doesn’t feel as though it belongs there at all. There is an important lesson here, one that I fear architects do not want to learn.

Wolfram claims that complexity science is about finding simple rules that can generate complexity. We can decode simple rules from traditional architecture that, even with the modest means of poor villagers, will generate complexity when applied repeatedly to random events, creating random fractals while simultaneously solving a vast diversity of unique problems. This is exactly the kind of work that good urbanists should be doing today, and from there we could allow maximum diversity in our cities without breaking symmetry and harmony, at costs as low as those of the meanest buildings built today. If Wolfram is correct, then the rules may be so simple that they can be easily codified into building regulation even by the dullest bureaucrats. Then again the behavior may be so complex (that is to say, there is emergence) that no a posteriori codification is even possible, and the processes by which cities are governed may have to be completely reconsidered. Either way this is not good news for architects. If architecture is so easy, then their idiosyncratic designs are neither necessary nor valuable. The big shot schools of architecture that Wolfram visited will be made irrelevant by Wolfram himself.

References

Mathieu Helie – Complex geometry and structured chaos
Stephen Wolfram – The Generation of Form in A New Kind of Science
Christopher Alexander – The Process of Creating Life