The Honeycomb film review model

Models. They are great. They let us break things down, get closer to what really matters and fully comprehend the situation. If only the world had more models.

Film reviewing has always been a dark art to me. I’ve tried to review films and failed every time (without fail). I’ve concluded that film critique requires skill and practice, so I’ll leave it to the professionals. But as an engineer (in training), I love abstraction. Hence, my love for models.

Here I present my proposal for critiquing a film in an abstract manner: something I call the Honeycomb film review model. It treats the success of a film like a honeycomb, with four somewhat discrete echelons.

1. A great, almost perfect film is a completed honeycomb with maybe a few hexagons lacking some honey. Examples: Dunkirk, Bridge of Spies

2. A dishevelled film lacking direction – one that would be better if it were completely remade. These films aren’t necessarily awful (although they can be); they might be good to watch but not feel complete. Examples: Avengers: Age of Ultron, Jurassic World

3. A good, fun film. Most people will like it, including the critics, but it’s not great. These films are a completed honeycomb, but not with the finest quality honey. Examples: Logan, Spider-Man: Homecoming

4. This is the most varied group. The honeycomb is basically complete, but a lot of the hexagons are only half filled. These films are not necessarily awful (but can be); they just have some major flaws which can either completely ruin the film or be overlooked. These are the kind of films critics dispute over. They evoke strong feelings, creating two poles – hate or love. Examples: The Lost World: Jurassic Park, Interstellar

The beauty of this model (if I may say so myself) is that it’s more objective than subjective. The fourth band, for instance, is where opinions will differ most strongly; in the first two echelons there will of course be disagreements about a film’s quality, but a consensus is likely to be reached.

One flaw I see immediately with this model is the lack of a place for films that critics love but viewers hate. Perhaps they belong in the first group, but that could mean the model has a bias towards critics’ opinions. Furthermore, films like the latest Transformers movie would firmly fall in the second echelon, yet you will find groups of people who would argue it belongs in at least band 3.

The Mac vs PC dilemma for a programmer Part 1: The problem that is Windows

Having recently started my Computer Science and Electronics degree, there was one thing that shocked me most: the ubiquity of Macs. Having no coding experience or knowledge, I had assumed that Macs were not for developing software. It turns out I was wrong. In fact, most people with programming experience (at my university) own and use a Mac. That’s not to say owning a Mac makes you a better programmer, but it does make your life a lot easier.

There are two problems that lead programmers to use a Mac:

The first problem is Windows. More specifically, the proprietary standards and procedures that Microsoft enforces in Windows. Installing a C compiler on Windows isn’t too difficult, but it requires more effort than on a Unix-like OS such as macOS. Then you have to be careful about which versions of Windows your program is compatible with. Windows has a lot of legacy support, which is nice for companies that want to run insecure and slow 20-year-old software, but incredibly inconvenient for programmers, because you constantly have to be aware of the weird nuances that Windows has but other operating systems do not. Installing SDL, for instance, is far from the pain-free experience it is on Unix.
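To make that concrete, here is a rough sketch (not a recipe for any particular setup): a trivial C program and the sort of commands you would typically use to build it on each platform. The exact compilers and install steps vary, so treat the comments as illustrative rather than definitive.

    /* hello.c: a trivial, portable C program.
     * Typical build steps (illustrative; your setup may differ):
     *   macOS:   clang hello.c -o hello     (clang ships with the Xcode command line tools)
     *   Linux:   gcc hello.c -o hello       (gcc is usually one package-manager command away)
     *   Windows: install MinGW-w64 or Visual Studio first, then
     *            gcc hello.c -o hello.exe   or   cl hello.c
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from whichever OS compiled me!\n");
        return 0;
    }

The program itself is identical everywhere; it is only the tooling around it that changes, which is really the point of this post.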

Of course, this depends on what languages you use. Java, for instance, is completely fine on Windows, comparable to Mac and Linux in every area.

This ultimately leads to a reliance on many workarounds. This is not good for a programmer who wants their program to compile and run on almost any machine.

It is now that we must bring up Linux. The saviour for Windows users who want to write programs! Well… not quite. Linux is far, far better than Windows for programming. Things work without workarounds, and installing compilers and software is never more inconvenient than copying and pasting a few commands into the terminal.

This leads to the second problem: Linux comes in many different flavours, all with their own set of problems. Ubuntu, for example, the most Mac/Windows-like distribution, is ugly (IMO) and doesn’t support apps like Microsoft Word (without hours of messing about). This makes Linux great for programming, but inconvenient for everything else. Linux distributions are not designed in the same way as macOS and Windows. macOS and Windows have to appeal to a large market, which is why they have to look fairly pretty, stay fairly consistent and provide adequate support. A Linux distribution does not have all of these goals. It may have some of them; Elementary OS, for example, focuses on aesthetics.

Ultimately, PC users usually have to stick with a dual boot. This is an inconvenience which can be detrimental to productivity. Having an OS that just works is something that I believe most programmers want, especially when working on mission critical projects.

Here we are, the solution: macOS. It has all of the conveniences of Windows: all the standard software you need, a pretty layout and a consistent design. It also has the conveniences of Linux (as it is also Unix-based): easy installation of compilers and standard tools, and no need for workarounds.

All seems well and good for Apple. However, in part 2 I will focus on hardware: the final crux of this problem.

Note: I was using a Windows PC until May of this year, when I made the switch to a 13″ MacBook Pro.

The post-PC era has yet to arrive

In 2013, Tim Cook called this the post-PC era. But that hasn’t really turned out to be the case. The PC still stubbornly sticks around.

It certainly is true that the peak for the PC has passed. PC sales (laptops and desktops) were greatest in 2011, and they have only declined since. But tablet sales have also declined, with their peak coming only 2 years after that of the PC [statista]. The trend, therefore, is clearly not that people are replacing their desktops and laptops with tablets. Instead, people are buying fewer of these devices year on year.

In truth, the tablet was never going to replace the PC, despite what Apple (or its CEO) has tried to claim. (Microsoft claimed this before Apple, but it was clear that what Apple and Microsoft class as tablets are totally different things.) And, I’ll be honest, I don’t understand why Tim Cook said that. He of all people must realise the unique opportunities the PC brings.

In a fairly recent statement, Cook said:

“The desktop is very strategic for us. It’s unique compared to the notebook because you can pack a lot more performance in a desktop — the largest screens, the most memory and storage, a greater variety of I/O, and fastest performance. So there are many different reasons why desktops are really important, and in some cases critical, to people.”

[TechCrunch]

It seems that Cook did deliver on that promise just yesterday at WWDC, with a refresh of pretty much all of the Mac line-up (sorry, Mac Mini), including the Mac Pro, which we can assume will be replaced by the iMac Pro. So it would appear that Apple’s post-PC mentality was a fad, a blip in history, hastily forgotten. Just a reminder of that history:

Just before WWDC yesterday, the Mac Pro was over 3 years old (still technically is), the Mac Mini was over 2 years old (and terrible) (yeah, that’s still true), and the iMac hadn’t been updated since October 2015.

It’s always easy to take history out of context, so compare that to Apple’s previous record: the average refresh cycle for the iMac is 317 days, compared to the 460+ days since the last iMac before WWDC yesterday [MacRumors]. The Mac Pro also had a 3-year gap between the 2010 model and the 2013 trash can, but the 2010 model was future-proofable, making that gap less painful for many prosumers. So clearly, this was not normal Apple behaviour.

Apple also released a new iPad Pro and a slew of new features specifically for iOS on the iPad, bringing it more in line with the macOS desktop. It’s clear that Apple sees a future with limited room for the PC, but for now the PC is very much part of that future.

Goodbye Wunderlist

It always seems to happen: Microsoft buys a small company that builds a really cool productivity app… and kills it. Sunrise took the bullet; now it’s Wunderlist’s turn. Whilst Wunderlist may still be on the App Store, it’s just waiting to be pulled. Microsoft have officially killed off Wunderlist in favour of their new app, Microsoft To-Do.

To-Do is made by the same lot who made the Wunderlist we know and love, but there’s always something special about an app that isn’t in the grip of one of the big three. This isn’t just a case of supporting the underdog. There are benefits to using apps from the small guys:

  • they’re forced (with exceptions) to be multi-platform
  • they rely on their customers first (not shareholders)
  • they are (usually) easier to connect with

Thankfully, Microsoft are usually good at these three things, and Microsoft To-Do looks promising, with Microsoft pledging to bring the best of Wunderlist into To-Do. Once this happens, Wunderlist will be (officially) dead.

Why Microsoft chose to reinvent the wheel they already had in their shed, I have no idea. It seems ridiculous, but if you’re interested in that, look here.

Wunderlist has been a go-to tool in my arsenal for over half a year, and a great one at that. I could continue using Wunderlist and eventually migrate to To-Do, but I have decided to use this opportunity to try something new for my to-do listing needs. So now I’m using Todoist. I may give my full thoughts after I’ve become properly acquainted with it.

I’ll miss you, Wunderlist, and the view you gave me of the Fernsehturm.

Hardware/Software

Almost all modern consumer electronics consist of two parts: hardware and software. We’ve always had hardware, but software has exploded only in the last few decades. And as software has exploded, it has taken over many of the roles that hardware used to handle.

Of course, hardware is needed for software: what good is software without any way of seeing it or, more importantly, writing it? But eventually, I see hardware becoming so subtle that it ceases to be important to the user. The software will be all we interact with, but even that will be ethereal. The benefit of this will be that we will no longer be using ‘devices’; ‘devices’ will just do things for us.

Currently, if we need to find something out, we go through this process:

  1. How will I find this? A phone call? Email? Message? Google search?
  2. Carry that action out and hopefully it works out
  3. If it doesn’t, rinse and repeat

In the future, this process could be as simple as asking a question. Instantaneously, the computer analyses your question and carries out these steps for you. For lack of a better word, it’s ‘frictionless’. This kind of technology may be a few decades in the pipeline, but in the near future many of the barriers we currently face will be removed through two simultaneous processes: the prevalence of software and the dissipation of hardware.

An example of this through the prevalence of software is Apple’s Handoff feature, which allows a user to switch from their Mac to their iPhone/iPad (or vice versa) without even thinking about it. This might not seem like an obvious example of the dissipation of hardware, but by allowing this seamless transfer between devices, each individual device becomes less important.

In terms of hardware, the almost complete disappearance of bezels on the Xiaomi Mi Mix removes much of the mental barrier of using a phone. It makes watching movies and playing games more immersive, but, more importantly, you forget that you’re holding a phone.

Virtual and augmented reality will only accelerate this inevitable seamless future, in which technology does things for you, rather than you doing things with technology. On the other hand, it could be argued that it is us doing things for the technology. If you thought people being slaves to their smartphone screens was bad, imagine people in their VR headsets, not even seeing the real world they live in.

Ultimately, we will embrace the new technology, as is always the case. However, what will decide whether its impact is positive or not is how we choose to use it. It has the power to connect us with more people in a more profound way, as Facebook did when it first arrived. It’s all a matter of how people adopt and use the technology.

Why I quit Vipassana

Vipassana is a meditation technique; for more information, see this Wikipedia article.

After the brutal 10 days that started my quest into Vipassana, I felt happier, more energetic and, most importantly, at peace with myself and, as a result, with others. I was a strong advocate of the course, recommending it to friends and family. I would say the 10-day course worked. It was a success.

But one course, despite totalling over 100 hours (4 days) of pure meditation, will not get you to the ultimate goal. Like anything, you must continue the practice consistently, not only to maintain its benefits but also to increase them.

S.N. Goenka recommends, at minimum, 2 hours every day: 1 hour in the morning and 1 hour at night. It sounds like a lot because it is a lot. 2 hours every day over 2 weeks totals more than a whole day (1 and 1/6 days). Over a year, that amounts to roughly a month. And if you practise Vipassana for 2 hours every day over 50 years, that equals about 4 years of your life. It’s crazy.

And, yes, the benefits of Vipassana are definitely non-trivial, but 2 hours every day isn’t trivial either. Even when I can find the time, I find it incredibly difficult to stay motivated to continue practising (often, after 15 minutes my mind convinces me to stop wasting time and start doing something productive).

Continuity of practice is the key, says Mr Goenka himself. So practising occasionally may provide sporadic benefits, but then so would exercising daily. To feel the true benefits of Vipassana you need to commit a lot of time and effort. This isn’t one of those “in the 21st century people don’t have time to…” statements; it’s an “unless you feel the benefits immediately, 2 hours a day is impossible” statement.

Maybe I’m doing something wrong. Maybe I’m playing ‘the sensation game’ (Mr Goenka says that many people who keep coming to courses and still feel little benefit are playing a game with their sensations instead of just observing them). I honestly don’t know. However, this technique is meant to be universal, so there must be something I can do to keep it up.

Calculations
2 hours a day = 28 hours every 2 weeks ≈ 1 and 1/6 days
= 730 hours a year ≈ 30 days (about a month)
= 36,500 hours over 50 years ≈ 4 years
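If you want to sanity-check those figures, the arithmetic is nothing more exotic than hours divided by 24. A quick, throwaway sketch in C:

    #include <stdio.h>

    int main(void)
    {
        const double hours_per_day = 2.0;

        double days_per_fortnight = hours_per_day * 14 / 24;   /* roughly 1.17 days */
        double days_per_year      = hours_per_day * 365 / 24;  /* roughly 30 days   */
        double years_over_fifty   = days_per_year * 50 / 365;  /* roughly 4.2 years */

        printf("Per fortnight: %.2f days\n", days_per_fortnight);
        printf("Per year:      %.1f days\n", days_per_year);
        printf("Over 50 years: %.1f years\n", years_over_fifty);
        return 0;
    }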

Apple’s naming convention still doesn’t make sense

At Apple’s annual developer conference, WWDC, Apple retired a name they’ve been using for 15 years. Unlike OS 9’s funeral, OS X’s wasn’t so dramatic, and for good reason: it’s not a big change. OS X is now becoming macOS to fall more in line with iOS, watchOS and tvOS. The problem is, it still doesn’t make sense.

That looks wrong:

macOS

That’s better:

OS X

Ooh, so pretty and streamlined. Until you look closer.

  • iOS 10
  • macOS Sierra
  • watchOS 3
  • tvOS

That doesn’t look right either. It’s a confusing mess: iOS and watchOS are assigned version numbers, whereas macOS gets a semi-random name. It’s really not a big deal, but I will say this: I can’t imagine Steve Jobs allowing this.