WWDC 2014: Metal is serious business

After hearing and reading everything Apple representatives said and wrote on the Metal theme, the best of the analysts came to the only correct conclusion: Metal for iOS 8 and the Apple A7 was a reconnaissance. Something much more important would not be long in coming. Apple had decided to take full control of yet another thin place in its systems, carefully weighing all the pros and cons. The level of secrecy was the same as under Steve: not a single leak. The venture was very risky.

Consider: the programs of many companies depended on OpenGL and/or OpenCL across all platforms, including iOS and OS X. Some of those programs were very important for Apple. Apple's share in those companies' earnings was significant, but losing it would not have threatened their livelihood.

What would they prefer: spending money on development for a non-standard API, or losing part of their income? Until you try, you never know. And it was best to move toward it slowly, so there would be time to brake if something went wrong.

In 2014, not everyone agreed with the best of the analysts. Many did not take Metal seriously. The more important event happened in June 2018 – but that is not our topic.

This is a continuation of a series about WWDC 2014; the previous parts are here:

First part: WWDC 2014: Apple’s 25th WWDC;
Second part: WWDC 2014: Remembering QuickDraw 3D.


The main culprits of what happened were graphics processors, and the madmen who paid for their rapid progress with their own health, lives, and wallets: gamers. You know whom I mean – not fans of Tetris or solitaire.

The performance of graphics processors (GPUs) grew, year after year, much faster than the performance of the central processors (CPUs) responsible for everything. In the end, a revolutionary situation developed in the relationship between CPU and GPU: those at the top could not (cope), those at the bottom did not want to (think for themselves, because they could not).

GPUs are much simpler. They consist of a large number of identical units working in parallel. These units make no decisions and have no idea what is happening around them; they carry out orders quickly and efficiently, without arguing.

Their performance was in massive demand; the pursuit of the pleasure of running the coolest computer games and quests played an invaluable role. Without it, investment in the rapid development of the dream factories called GPUs would have been far more modest.

Serious professionals needed graphics power too, but how many of them are there?

By the middle of the first decade of the 21st century, the central processor (CPU) was no longer able to fully use the power of the GPU. Aviation weapons designers faced the same problem when supersonic speeds became commonplace: everything happened so fast that the crew physically had no time to assess the situation and aim.

Now the CPU found itself in the humans' position. It could no longer simultaneously "fly" the game logic, prepare and verify ultra-precise commands, and display the results on the screen.


Apple's method

If you want to know why Apple products – despite the huge numbers on their price tags, and despite strange limitations and weird restrictions to which users pay almost no attention (beyond grumbling and spitting) – almost always succeed, the border conflict between CPU and GPU revealed this secret more clearly than anything else.

They do something like this all the time; it is the essence of their method.

The first Mac was absolutely impossible: it had quite good performance characteristics for its time, but to implement its main "trick" – the GUI – it needed far more than it had.

The company's engineers fitted all the structural elements to one another with a jeweler's precision, showed fantastic ingenuity, and restricted user access to the most stressed joints in the system.

The first iPhone was absolutely impossible, but they made it the same way.

Using even half the power of the GPU was impossible, but they learned how to do it. The interaction of CPU and GPU is an incredibly complex area of human knowledge. Threads, pipelines, shaders, buffers, rendering – many thick books have been written about it, and before reading them you need to read many other books, and understand them…

But how they did it reveals the essence of their method more vividly and clearly than anything else.


The interaction between CPU and GPU was handled by a "translator", and that translator was far from optimal. The translator was OpenGL: a language that by 2014 had been around for more than two decades, universal, supporting every platform worthy of it and every graphics processor deserving of it – from the newest with their almost thermonuclear power to the most archaic, from the past millennium.

In the name of compatibility with all this diversity, and to avoid becoming overly complicated, OpenGL approached its tasks in a generalized way, trying to account for everything at once – because a universal tool needs, first of all, to do exactly what it is intended to do. Otherwise it would be useless. Everything else is also important, but not as much.

Compatibility simplifies distributing graphics solutions across platforms. That matters commercially too; it is probably second in importance.

Of course, OpenGL was not optimal on any of the platforms where it was used; it was a compromise solution. That is, a "thin place" where the conflict, chronic at its core, escalated and caused significant damage.

For the sake of clarity (which, as the real Müller used to say, is one of the forms of complete fog), let us examine the interaction of the CPU with the GPU using a typical "shooter" as an example. The game is a kind of logic: characters with certain properties, a landscape, weapons, and similar virtual realities. This is the CPU's area of responsibility.

In each unit of time (1/30 to 1/60 of a second) the current game situation must be displayed on the screen. At the very last stage the GPU handles the display, but because of its stupidity it needs absolutely accurate and very detailed instructions.

These instructions are prepared by the CPU. With OpenGL, every frame it must describe all the objects that should appear on the screen, specify their physical properties (color, material), define the light source or sources, recheck the "shaders" (small functions executed by the GPU) that control shadows, reflections, and a host of other things – and, possibly, translate their source code into GPU commands.

Doing all of this again and again makes sense if you support a variety of platforms with their "quirks" and GPUs with different capabilities.

1/30 of a second is an eternity for a modern CPU. But even a highly abbreviated list of the actions needed to render a single frame is impressive; in practice the CPU has to perform an enormous number of them. The result: the GPU waits patiently while the CPU prepares everything it needs, rapidly executes the new instructions, and waits again.
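The imbalance described above can be put into numbers. Here is a tiny, purely illustrative Swift model – the millisecond figures are invented for the sake of the example, not measured: a CPU that needs 40 ms to re-describe the scene, feeding a GPU that can draw it in 5 ms, caps the game at 25 fps and leaves the GPU idle most of the time.

```swift
// A toy model of the CPU-bound frame loop (illustrative numbers only).
struct FrameBudget {
    let cpuPrepMs: Double   // time the CPU spends re-describing the scene each frame
    let gpuExecMs: Double   // time the GPU needs to actually draw it

    // In this simplified sequential model the frame takes as long as the
    // slower stage; here the CPU dominates.
    var frameMs: Double { max(cpuPrepMs, gpuExecMs) }
    var fps: Double { 1000.0 / frameMs }
    // Fraction of each frame the GPU spends waiting for instructions.
    var gpuIdleShare: Double { (frameMs - gpuExecMs) / frameMs }
}

let openGLStyle = FrameBudget(cpuPrepMs: 40, gpuExecMs: 5)
print(openGLStyle.fps)           // 25 fps: the game stutters
print(openGLStyle.gpuIdleShare)  // 0.875: the GPU idles 87.5% of the time
```

Shrinking `cpuPrepMs` – which is exactly what Metal set out to do – is what raises the frame rate in this model; making the GPU faster changes nothing.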

1/30 of a second is often not enough. The game slows down – even though you could boil kettles on the GPU (if not for its cooling system).

Apple's recipe ("cooking" Metal)

First, concentrate on a single GPU: the one embedded in the Apple A7, a PowerVR G6430 in a 4-cluster configuration. Strike at the point of the main attack. Ignore the rest (for now).

Second, write the "translator" from a clean slate, as if OpenGL had never existed (but remember everything you would like to avoid).

Since we know exactly the strengths and weaknesses of the single GPU that needs supporting, we account only for them.

Third, factor out everything specific to this particular GPU and, where possible, prepare everything in advance: all the shaders, the physical properties, the elements of the composition.

That is the recipe, in short.
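In code, the recipe looks roughly like this. Below is a minimal, hypothetical Swift sketch of the Metal approach (the shader names `vertexMain`/`fragmentMain` and the setup details are assumptions for illustration, and it needs an Apple device with a GPU to run): the expensive work of compiling shaders and validating state happens once, up front, while the per-frame loop only encodes commands.

```swift
import Foundation
import Metal

// One-time setup, done outside the frame loop.
// Shader compilation and state validation happen here, once –
// not on every frame, as in the classic OpenGL model.
func makePipeline(device: MTLDevice) throws -> MTLRenderPipelineState {
    let library = try device.makeDefaultLibrary(bundle: .main)
    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "vertexMain")
    descriptor.fragmentFunction = library.makeFunction(name: "fragmentMain")
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    return try device.makeRenderPipelineState(descriptor: descriptor)
}

// Per-frame work: just encode commands against the prebuilt pipeline state.
func drawFrame(queue: MTLCommandQueue,
               pipeline: MTLRenderPipelineState,
               pass: MTLRenderPassDescriptor,
               drawable: MTLDrawable) {
    guard let buffer = queue.makeCommandBuffer(),
          let encoder = buffer.makeRenderCommandEncoder(descriptor: pass)
    else { return }
    encoder.setRenderPipelineState(pipeline)  // cheap: no revalidation
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    encoder.endEncoding()
    buffer.present(drawable)
    buffer.commit()                           // hand the frame to the GPU
}
```

The design point is the split itself: everything that OpenGL rechecked per frame is frozen into an immutable `MTLRenderPipelineState`, so the per-frame CPU cost collapses to filling a command buffer.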

And at this point Steve would have said "boom": in some cases the performance of the CPU–GPU tandem increased 10 times. True, nobody ever demonstrated such cases, but a 5–6× acceleration happened quite often.

Well, the marketers fibbed (slightly) – that is their job.

It was not only speed that increased: the CPU gained plenty of spare time that could be spent on game logic, on increasing the number of characters, on more realism.

At WWDC 2014 they showed Zen Garden. On the Apple A7, using ordinary OpenGL, showing it would have been absolutely impossible.

And all this only for the combination of the Apple A7 with a PowerVR G6430 in 4-cluster configuration? Doesn't that sound like absolute, unforgivable stupidity?

No. Those well versed in this area predicted, back in June 2014, what happened four years later.

This was only the beginning. Metal replaced OpenGL (and with it the entirely blameless OpenCL, which simply came to hand) first in one particular combination (iOS 8 and the Apple A7), then in another, then…

Other reasons for creating Metal

Even if OpenGL's inability to perform its duties efficiently had been costing Apple dearly, until 2009 or 2010 Apple would not have given it up. That would have been a mistake that could have ended in the company's death.

Before the mobile revolution, Apple depended too heavily on developers' willingness to port programs from better-known, more common platforms to OS X.

Now the company was one of the market leaders in mobile devices – at times the absolute leader.

And its success in organizing the development of its own processors (CPUs and the systems-on-chip built around them) added to its self-confidence.

Also, if you haven't tried programming in Metal – try it! It is very beautiful. Against the background of Metal, OpenGL looks like last year's stock ticker next to brilliant, exciting prose.

To be continued

You can discuss the history of Apple in our Telegram chat. There is a lot about Metal there.
