Is regret regrettable?

You often hear the phrase “I have no regrets” thrown around. It is a dishonest phrase, of course, but even if it were true, would that be such a good thing? To regret something is, on reflection, to admit it was a bad decision. Ultimately, it means that if I were to do it again, I would definitely choose a different way. And in that I see a great sense of wisdom and humility.

If you have no regrets, is it A – because you do not regret your decisions even though some of them were not great (i.e. regrettable) – or B – because you are perfect and never make mistakes? If B is true, please get in touch! Most likely it is A, in which case having no regrets shows a certain lack of self-remorse and self-empathy.

My point here, really, is nothing new. I simply wish to say that reflecting on your actions is a positive step. If we just ignore what we have done in the past, we are likely to make the same mistakes, then get frustrated because – guess what – we have been here before.

Of course, there is a balance to be struck. Regretting every little mishap is not helpful, and even with larger mistakes, it is not skilful to be constantly regretful. Regret can bring suffering to one’s life if one is not able to distance oneself from it. If we observe regret as a thought or an emotion, we can still experience it without identifying with it. Further, it is important to realise that not everything is under our control: often we make the best decision, but owing to unforeseen or purely unavoidable circumstances, it does not turn out as expected. This we should not regret; to do so is unfair and unwise too.

Ultimately, I think regret should serve as a reminder. A simple way to keep us in check and try to minimise our errors. I hope I do not regret writing this post.


The awkward gap between the physical and digital world

Everyone loves to write, or at least everyone prefers writing to typing. It might not be as secure or safe as its digital counterpart, but it’s easy and enjoyable. Despite iPads and Surfaces, nothing beats pen and paper. Will it ever be beaten? I presume so, but I don’t see the technology in existence today. There are three hurdles that we have yet to overcome:

  • Texture – Writing on glass is uncomfortable and unnatural, whereas paper provides an ideal amount of friction.
  • Ease – Although taking an iPad around with you isn’t much harder than carrying around a notepad, what about when you forget your iPad or it’s out of charge?
  • Nostalgia – This is definitely the smallest hurdle, and one I see dying out if the first two hurdles are solved, but people thought typing would replace writing and it hasn’t.

I am hopeful that these hurdles can be cleared. The second I see being the most challenging: how can you make something as accessible and easy as paper? Perhaps augmented reality has a role here in providing virtual paper. What makes this particularly difficult is that it is not simply an engineering challenge; it is one that requires a huge shift in many people’s lives.

No matter the technology, the act of writing will need to feel natural. If augmented reality is not fully able to provide this, then perhaps in the interim a hybrid solution will be required, like the transition from the combustion engine to electric.

Of course we want the perfect solution, but we also want it fairly quickly. The situation now is incredibly awkward. As I write this blog post, I am actually writing it on a sheet of paper. Now I am transcribing it, because no technology that exists today is good enough to decipher my handwriting. But even if scanning technologies could decipher the undecipherable, scanning is awkward because it requires every little thought or image that you want to write down to be processed.

I have not yet mentioned reading, which, despite what you may think, is done more with a real book than an ebook (in the UK at least)[1]. Even with newspapers, which are largely read online, their readers would often prefer a physical paper (cost excluded). The problem here has different hurdles, though.

Ease is not a problem: finding reading material on the web is far easier than finding it in print, and even if you own a book you have to remember to carry it around with you. Nostalgia and texture play much bigger roles. Reading is about connecting to the writing, which is much easier to do with another one of your senses involved (touch).

A waiting game I guess it is.

1. The Guardian – March 2017

A Poem is Poetry written by a Poet

In my not-so-well-defined quest to behave more like an artist, I would like to dedicate this week’s post to poetry. And just like the topic of my previous post, I think that poetry is often misunderstood. The difference this time is that I feel like I am one of the ill-informed.

I never cared for poems,
Never saw their use,
What good is writing
without being profuse?

As you can see, I don’t understand poetry, but I would really like to. Reading poems gives me great pleasure, thanks to the voice in my head that dedicates itself to this task. He reads with a beat, but not a static beat: one willing to change. I digress. I enjoy reading poems, but I don’t understand why or how. What are they for? Pure entertainment? Outbursts of genius? I don’t have a clue.

A man once said:
This is no song,
This is a poem,
Don’t get it so wrong

I must admit writing these little poems is good fun. I realise they are no good, but they are incredibly joyous to write.

I fell in love with poems,
Not a first sight,
But from Paterson,
The bus driver,
Who never pulled a fight

That last so-called poem is a reference to the brilliant film Paterson, in which a bus driver named Paterson (played by Adam Driver) writes poems. It had an esoteric feel, but this gave it such charm that I fell in love with it, and with poetry.

I think that’s enough for this week. I suppose this week is like a homework assignment: write poems!

The Honeycomb film review model

Models. They are great. They let us break things down, get closer to what really matters and fully comprehend the situation. If only the world had more models.

Film reviewing has always remained a dark art to me. I’ve tried to review films and failed every time (without fail). I have concluded that film critique requires skill and practice, so I will leave it to the professionals. But as an engineer (in training), I love abstraction. Hence, my love for models.

Here I present my proposal for critiquing a film in an abstract manner, with something I call the Honeycomb film review model. It looks at the success of a film like a honeycomb. There are four somewhat discrete echelons.

1. A great, almost perfect film is a completed honeycomb with maybe a few hexagons lacking some honey. Examples: Dunkirk, Bridge of Spies

2. A dishevelled film lacking direction – one that would be better if it were completely remade. These films aren’t necessarily awful (although they can be); they might be good to watch but not feel complete. Examples: Avengers: Age of Ultron, Jurassic World

3. A good, fun film. Most people will like it, including the critics, but it’s not great. These films are a completed honeycomb, but not with the finest quality honey. Examples: Logan, Spider-Man: Homecoming

4. This is the most varied group. The honeycomb is basically complete, but a lot of the hexagons are half filled. These films are not necessarily awful (but can be); they just have some major flaws, which can completely ruin the film or be overlooked. These are the type of films which critics dispute over. They evoke strong feelings, creating two poles: love or hate. Examples: The Lost World: Jurassic Park, Interstellar

The beauty of this model (if I may say so myself) is that it’s more objective than subjective. For instance, the fourth band is where opinions will differ most strongly. In the first two echelons, there will of course be disagreements on the quality of the film, but a consensus is likely to be reached.
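Being a model, the four echelons lend themselves to a toy encoding. Here is a minimal sketch: the band descriptions are paraphrased from the list above, while the `place` helper and its output format are my own illustration, not part of the model itself.

```python
# A toy encoding of the Honeycomb film review model.
# Band descriptions are paraphrased from the post; the helper
# function is illustrative only.

HONEYCOMB_BANDS = {
    1: "completed honeycomb, a few hexagons lacking honey (almost perfect)",
    2: "dishevelled and lacking direction; better if completely remade",
    3: "completed honeycomb, but not the finest quality honey (good fun)",
    4: "basically complete, many half-filled hexagons (divisive)",
}

def place(film, band):
    """Place a film in one of the four echelons of the model."""
    if band not in HONEYCOMB_BANDS:
        raise ValueError("band must be 1-4, got %r" % (band,))
    return "%s: band %d, %s" % (film, band, HONEYCOMB_BANDS[band])

print(place("Dunkirk", 1))
print(place("Jurassic World", 2))
```

The point of writing it down like this is that placing a film forces you to pick exactly one band, which is where the model’s claim to objectivity comes from.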

One flaw I see immediately with this model is the lack of a place for films that critics love but viewers hate. Perhaps they belong in the first group, but that could mean that this model has a bias towards critics’ opinions. Furthermore, films like the latest Transformers movie would firmly fall in the second echelon, yet you will find groups of people who would argue it belongs in at least band 3.

The Mac vs PC dilemma for a programmer Part 1: The problem that is Windows

Having recently started my Computer Science and Electronics degree, there was one thing that shocked me most: the ubiquity of Macs. Having no coding experience or knowledge, I had assumed that Macs were not for developing software. It turns out I was wrong. In fact, most people with programming experience (at my university) own and use a Mac. That’s not to say owning a Mac makes you a better programmer, but it does make your life a lot easier.

There are two problems that lead programmers to use a Mac:

The first problem is Windows. More specifically, the proprietary standards and procedures that Microsoft enforces in Windows. Installing a C toolchain on Windows isn’t too difficult, but it requires more effort than on a Unix-based OS like MacOS. Then you have to be careful about which versions of Windows your program is compatible with. Windows has a lot of legacy support, which is nice for companies that want to run insecure, slow, twenty-year-old software, but incredibly inconvenient for programmers, because you have to be constantly aware of the weird nuances that Windows has but other operating systems do not. Installing SDL, for instance, is far from the pain-free experience it is on Unix.

Of course, this depends on which languages you use. Java, for instance, is completely fine on Windows, comparable to Mac and Linux in every area.

This ultimately leads to a reliance on many workarounds. This is not good for a programmer who wants their program to compile and run on almost any machine.
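To make the “workarounds” point concrete, here is a small sketch of my own (not from any particular project) of the kind of platform special-casing a cross-platform program ends up carrying: even just finding a C compiler differs between Windows and Unix.

```python
import platform
import shutil

def find_c_compiler():
    """Return the path to a C compiler, or None if none is found.

    On Unix-like systems 'cc' or 'gcc' is usually already on the PATH;
    on Windows you typically have to look for MSVC's 'cl' or a MinGW
    'gcc' instead -- exactly the kind of special-casing described above.
    """
    if platform.system() == "Windows":
        candidates = ["cl", "gcc", "clang"]   # MSVC first, then MinGW/LLVM
    else:
        candidates = ["cc", "gcc", "clang"]
    for name in candidates:
        path = shutil.which(name)
        if path is not None:
            return path
    return None

print(find_c_compiler())
```

Multiply this by every library, path convention and line-ending quirk, and the appeal of an OS where things just work becomes obvious.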

It is now that we must bring up Linux. The saviour for Windows users who want to write programs! Well… not quite. Linux is far, far better than Windows for programming. Things work without workarounds, and installing compilers and software is never more inconvenient than copying and pasting a few commands into the terminal.

This leads to the second problem: Linux comes in many different flavours, all with their own set of problems. Ubuntu, for example, the most Mac/Windows-like distribution, is ugly (IMO) and doesn’t support apps like Microsoft Word (without hours of messing about). This makes Linux great for programming, but inconvenient for everything else. Linux distributions are not designed in the same way as MacOS and Windows. MacOS and Windows have to appeal to a large market; that’s why they have to look fairly pretty, stay fairly consistent and provide adequate support. A Linux distribution does not have all of these goals. It may have some of them; Elementary OS, for example, focuses on aesthetics.

Ultimately, PC users usually have to stick with a dual boot. This is an inconvenience which can be detrimental to productivity. Having an OS that just works is something that I believe most programmers want, especially when working on mission critical projects.

Here we are, the solution: MacOS. It has all of the conveniences of Windows: all the standard software you need, a pretty layout and a consistent design. It also has the conveniences of Linux (as it is also built on Unix): easy installation of compilers and standards and no need for workarounds.

All seems well and good for Apple. However, in part 2 I will focus on hardware: the final piece of this problem.

Note: I was using a Windows PC until May of this year, when I made the switch to a MacBook Pro 13″.

The Post-PC era has yet to Arrive

In 2013, Tim Cook called this the post-PC era. But that hasn’t really turned out to be the case. The PC still stubbornly sticks around.

It certainly is true that the peak for the PC has passed. PC sales (laptops and desktops) were greatest in 2011, and they have only declined since. But tablet sales have also declined, with their peak coming only two years after that of the PC [Statista]. The trend, therefore, is clearly not that people are replacing their desktops and laptops with tablets. Instead, people are buying fewer of these devices year on year.

In truth, the tablet was never going to replace the PC, despite what Apple (or its CEO) has tried to claim. (Microsoft claimed this before Apple, but what Apple and Microsoft class as tablets are totally different things.) And, I’ll be honest, I don’t understand why Tim Cook said that. He of all people must realise the unique opportunities the PC brings.

In a fairly recent statement, Cook said:

“The desktop is very strategic for us. It’s unique compared to the notebook because you can pack a lot more performance in a desktop — the largest screens, the most memory and storage, a greater variety of I/O, and fastest performance. So there are many different reasons why desktops are really important, and in some cases critical, to people.”


It seems that Cook did deliver on that promise just yesterday at WWDC, with a refresh of pretty much all of the Mac line-up (sorry, Mac Mini), including the Mac Pro, which we can assume will be replaced by the iMac Pro. So it would appear that Apple’s post-PC mentality was a fad, a blip in history, hastily forgotten. Just a reminder of that history:

Just before WWDC yesterday, the Mac Pro was over 3 years old (still technically is), the Mac Mini was over 2 years old (and terrible) (yeah, that’s still true), and the iMac hadn’t been updated since October 2015.

It’s always easy to take history out of context, so compare that to Apple’s previous record: the average refresh cycle for the iMac is 317 days, compared to 460+ days since the last iMac before WWDC yesterday [MacRumors]. The Mac Pro also had a three-year gap between the 2010 model and the 2013 trash can, but the 2010 model was upgradeable, making that gap less painful for many prosumers. So clearly, this was not normal Apple behaviour.

Apple also released a new iPad Pro and a slew of new features specifically for iOS on the iPad, bringing it more in line with the MacOS desktop. It’s clear that Apple sees a future with limited room for the PC, but for now the PC is very much part of the future.

Goodbye Wunderlist

It always seems to happen: Microsoft buys a small company that builds a really cool productivity app… and kills it. Sunrise took the bullet; now it’s Wunderlist’s turn. Whilst Wunderlist may still be on the App Store, it’s just waiting to be pulled. Microsoft have officially killed off Wunderlist in favour of their new app, Microsoft To-Do.

To-Do is made by the same lot who made the Wunderlist we know and love, but there’s always something special about an app that isn’t in the grip of one of the big three. This isn’t just a case of supporting the underdog. There are benefits to using apps from the small guys:

  • they’re forced (with exceptions) to be multi-platform
  • they rely on their customers first (not shareholders)
  • they are (usually) easier to connect with

Thankfully, Microsoft are usually good at these three things, and Microsoft To-Do looks promising, with Microsoft promising to implement the best of Wunderlist into To-Do. Once this happens, Wunderlist will be (officially) dead.

Why Microsoft chose to reinvent the wheel they had in their shed, I have no idea. It seems ridiculous, but if you’re interested in that, look here.

Wunderlist has been a go-to tool in my arsenal for over half a year, and a great one at that. I could continue using Wunderlist and eventually migrate to To-Do, but I have decided to use this opportunity to try something new for my to-do-listing needs. So now I’m using Todoist. I may give my full thoughts after I’ve fully become acquainted with it.

I’ll miss you, Wunderlist, and the view you gave me of the Fernsehturm.