Thursday, December 27, 2012

Half-Life 2 episodes

Never mind HL2 Episode 3 becoming a new Duke Nukem Forever... I'm not here to talk about that.

I'm here to talk about the naming of the trilogy. I have never understood it.

After the success of Half-Life 2, Valve started developing a direct continuation to it, in the form of smaller games in an "episodic" format. The original intention was that instead of spending two or three years making one big game, they would publish smaller games at a faster pace. Of course we all know what happened to those plans (while Episode 1 was slightly delayed, it was more or less on par with the schedule; Episode 2 was way, way delayed; but neither compares to the ridiculous delay of the third one), but as said, I'm not here to talk about the delays.

From almost the very beginning Valve announced that when the three episodes got published, they would effectively constitute Half-Life 3. So it puzzles me to no end why exactly they are named "Half-Life 2 Episode 1" etc. when they should be quite obviously called "Half-Life 3 Episode 1" etc. They make up HL3, so it would only make sense to call them that. The first game is the "first episode" of HL3, and so on, so why not name it as it is?

Naming them "Half-Life 2 Episode X" only caused confusion. In fact, even today it still causes confusion among many less hardcore players (who might not have played the episodes.) They don't really know what the name implies. It sounds like some kind of remake of, or even a prequel to, Half-Life 2. In fact, the naming would make a lot more sense if they were prequels.

That naming scheme makes absolutely no sense for a direct continuation to the story.

Wednesday, December 26, 2012

Air guitar

There are many hobbies that people engage in that do not require much talent or expertise. The main point is that they are fun. In some cases these hobbies may be fun to watch, too.

Air guitar is not, in my opinion, of the latter kind.

It may be fun to prance around to the rhythm of a groovy guitar solo and pretend to play an imaginary guitar, but watching it is not very fun. It actually looks ridiculous.

Pretending to play a guitar is not a talent, it does not show anything interesting, and in fact it looks ridiculous and obscene. At least with a real guitar it doesn't look like you are playing with yourself, but without it... It just looks horrible, and I really don't want to watch that.

What's worse, some people take it way too seriously. Heck, there are world championships of air guitar. WTF?

People argue that it's all about the show. Fine, but if you want a dance show, then make it a dance show, not a pretend-guitar-playing show that doesn't require talent, looks ridiculous and, what's worse, obscene.

I don't want to watch that. Give me either a dance show or a real guitar shredding show, not this.

Tuesday, December 25, 2012

Music legends get a free pass

In the world of music there have been, and are, truly extraordinarily talented performers who have contributed more than just songs: they have helped pioneer entire styles and genres. They are true music legends, the people who set the standards, who not only composed and performed individual songs but also developed musical styles further and even invented entirely new forms.

Such legendary musicians often attain such cult status that they get a free pass on everything they have done, even when it doesn't really compare all that well to later developments. Criticizing their work, or deeming it inferior to later works, seems tantamount to blasphemy.

To take one particular example, consider the song Unchain My Heart, originally performed by Ray Charles, and later covered by Joe Cocker.

Ray Charles enjoys exactly this kind of legendary cult status. He was certainly a pioneer of soul music and helped define an entire new genre. Thus it's no wonder that many people will say things like "yeah, Cocker's cover of the song is good, but I like Ray Charles' original version better."

I honestly don't understand this. I have listened to both versions, and I just can't help but consider Cocker's cover better on all possible counts. The arrangement is better, the tempo is better, the instruments are better, it's groovier, Cocker's singing voice is better, even the backing vocalists are better. I can't find a single thing that I would consider better in Ray Charles' original version.

The fact is, if Cocker's version had been the original (ie. Ray Charles had never performed it), and someone else had later come up with a cover exactly like the Ray Charles version, every single person in the universe would consider it significantly worse. However, since it's Ray Charles who performed the original, he automatically gets a free pass and is beyond all criticism. His version is automatically "better", no matter how weak it is in comparison.

No disrespect to Ray Charles, but I think people should be slightly more objective, stop putting people on such pedestals, and not grant performances a special status and exceptions simply because a musical legend was the first to perform them.

Tuesday, December 11, 2012

Nancy Drew: Curse of Blackmoor Manor

This is not something that grinds my gears. On the contrary, the video game Nancy Drew: Curse of Blackmoor Manor is in fact a little gem.

I bought this game on Steam because they were having a sale, and it cost just a few euros. At first I was disappointed, but then it became quite engaging.

The thing about this game is that probably at least 90% of gamers nowadays would not like it. It's way, way too difficult for casual gamers, and most hardcore gamers would probably be turned off by how technologically antiquated it feels. This game was released in 2004 for Windows, yet it feels like a DOS game from the early 90's. It consists of pre-rendered still images and short FMVs, with no sprites, 3D models or anything else. Even its screen resolution is fixed at 640x480. In fact, if you were to remove the voice samples, it could quite well pass for a DOS game from 1994 rather than 2004.

And as said, most casual gamers wouldn't like it either, because it's so damn difficult. How difficult? Think of Myst. Really hard, obscure puzzles that sometimes even require writing things down on physical paper (or, as I did, taking screenshots with your cellphone camera so you can consult them while playing.) Hints are often subtle and quite hard to catch. In a few cases you more or less have to guess, because the solution isn't explicitly and unambiguously hinted at.

Yet every single puzzle, as difficult as it is, is actually solvable by deduction from the hints you get. There aren't, in fact, any puzzles where you just have to blindly try all possible combinations or anything like that, as long as you realize which other detail in the game was actually a hint for the solution.

The thing is, it can be really difficult, but it's quite rewarding when you finally figure it out. In fact, I set out to finish the game without reading any walkthrough or anything. Just by playing. And I succeeded. Game finished, every single puzzle solved by myself, and properly at that. (Steam recorded my total playing time as 20 hours, although part of that was spent on the pause screen.)

I think this is a little gem of a game which unfortunately not many people will appreciate, especially nowadays, when everything has to be either really casual or more action-driven.

Friday, November 23, 2012

Newspapers vs. new media

Newspapers and the press have hundreds and hundreds of years of history, and have had a big impact on society throughout (for good and bad alike.) For hundreds of years newspapers thrived and were a staple of any society. This is because for a long, long time they were basically the only medium people had for getting information about current events (be they local, national or international), and people thirst for that kind of information.

In the past 50 years or so TV has kind of become a big competitor to newspapers, but never really supplanted them.

However, during the past decade or two a new form of media has become so big and prevalent that it actually has turned into an almost newspaper killer: The internet (often colloquially called "new media.")

Traditional newspapers have struggled for a decade or two to adapt. Physical newspapers sell less and less because nowadays it's simply easier for people to search for information on the internet, usually for free, than to buy a newspaper. Traditional newspapers have tried to transition to the internet, some with more success than others.

One of the biggest mistakes that many such newspapers make is this: online articles would be a genuinely useful and valuable resource for many people, especially if they could be referred to. They can be important sources of information about past events, and other articles could refer to them. The internet could be a handy, easy-to-use and free-for-all archive of newspaper publications (something that in the past required going to a library offering such a service, eg. in the form of microfilms of scanned newspapers, and making tedious manual searches with microfilm viewing devices.) However, many newspapers publish only some articles, and often only temporarily, removing them from public access after a while. (Either they remove them completely, or they make accessing them non-free.)

And then they wonder why their readership is decreasing. The one thing that would actually increase the amount of visitors is the one thing that they often avoid (ie. keeping all articles accessible forever.)

The smarter newspapers keep their online articles available forever, but not all of them are that smart.

Monday, November 19, 2012

Microsoft's greed with the Xbox 360

Netflix is, basically, an online video rental service. It's available for a surprisingly large number of platforms besides just desktop PCs, including various Android-based cellphones and tablets, the iPhone, iPod and iPad, Apple TV, various Blu-ray Disc players, and several gaming consoles: the Nintendo Wii, the 3DS, the Sony PlayStation 3, the PlayStation Vita, and the Xbox 360.

There's one thing in common with all of them: Netflix can be used on all of them without any additional cost on the part of the platform's manufacturer.

Except for Microsoft's Xbox 360.

Of the literally hundreds of different platforms that support Netflix, the Xbox 360 is the only one where it cannot be used without paying additional money to the platform's manufacturer (in this case Microsoft.) An Xbox Live Gold subscription is required to use Netflix.

One could argue that this subscription is needed to cover costs on Microsoft's part. I don't know whether running Netflix requires anything at all from Microsoft (eg. whether it uses anything on a Microsoft server), but it sounds spurious, because for example none of Sony, Nintendo or Apple impose any such additional cost to use Netflix. It just sounds like yet another way for Microsoft to pressure people into paying the monthly subscription fees, at the cost of a third party, and without any concrete reason for it (such as server maintenance costs or the like.)

I wouldn't be surprised if Netflix weren't very happy with this situation. Microsoft is using their service to entice people into paying monthly fees to Microsoft (while other platforms do no such thing.)

It can get quite egregious at times. Recently Microsoft added Internet Explorer to the Xbox 360, which means you can now surf the web with the console. Well, guess whether it can be used without the Xbox Live Gold subscription. And why exactly is the subscription needed? It's not like the browser needs anything from a Microsoft server any more than the same browser on your PC does. (And even if it did, I'm quite sure Microsoft could afford it. After all, the console already downloads tons of things from the Xbox Live servers even without the Gold subscription.)

I'm surprised they don't limit the ability to actually buy games from Xbox Live to those with the Gold subscription. I suppose in that case it would have been more of a loss than a gain (not to mention the backlash from game vendors.)

Tuesday, November 6, 2012

Scientific institutions should know better

I had a bookmark to a YouTube video where Neil deGrasse Tyson talks about UFOs and all the argumentative fallacies regarding them. The man is one of the smartest people alive, I really admire him, and I think he has done an astonishingly good job at popularizing science and bringing some rationality to public discourse amidst widespread irrational superstition.

Today I went to watch that again because it's just such a great video. What do I encounter there?

"This video is no longer available due to a copyright claim by St. Petersburg College."

This is really worthy of a facepalm. Double facepalm. How stupid can these people be? How completely and utterly stupid?

St. Petersburg College, you are not helping the spreading of science and rationality with stunts like this. You are only doing the exact opposite. You are doing the exact same thing as the irrational fanatics are doing.

This makes no sense. This kind of material that helps humanity should be free and public domain. Nobody should have such rights to knowledge and rationality. Nobody should have the right to shut down the broadcasting of this. Making it open and free is only a service to humanity.

Monday, October 29, 2012

People are really bad at grasping probabilities

Oftentimes the human mind works in rather curious ways. For example, let's assume this hypothetical situation:

  • A new flu pandemic has appeared that's especially nasty. About 1% of all people who contract it will die. (This is not unrealistic because such flu pandemics have happened, at even higher mortality rates.) You are pretty much guaranteed to get the flu unless you live a really isolated life.
  • A vaccine is developed that prevents contracting the flu completely.
  • Later it's discovered that approximately 0.1% of people get a serious chronic disease from the vaccine.
What happens in this situation? A significant number of people will refuse to take the vaccine, instead opting for the 1% probability of dying. It doesn't matter how much you explain the probabilities to them; they won't budge. But why?

The highly contradictory reaction that many people have to this becomes even clearer if we assume two alternative hypothetical situations:
  1. Instead of the vaccine giving you a chronic disease (with a 0.1% probability), it is merely slightly ineffective: it reduces the probability of contracting the flu to 10%. This means that if you take the vaccine, your probability of dying from the flu drops from 1% to 0.1%.
  2. The vaccine protects you from the flu completely, but it has a 0.1% probability of killing you outright.
In the first situation most people wouldn't have a problem taking the vaccine, but in the second situation a significant number of people would refuse it.

If you think about it, that kind of thinking makes no sense. The two situations are completely identical: if you don't take the vaccine you have a 1% chance of dying, and if you take it, you have a 0.1% chance of dying. It doesn't matter what exactly it is that kills you; the probability is exactly the same. Yet in the first case people have no problem taking the vaccine, while in the second case they do.
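The equivalence is easy to verify with the scenario's own numbers. This is just a sketch using the hypothetical figures stated above; none of them are real epidemiological data.

```python
# Hypothetical figures from the scenarios above -- not real data.
p_flu_death = 0.01  # the flu kills 1% of those who contract it

# No vaccine: contracting the flu is assumed to be certain.
p_death_no_vaccine = 1.0 * p_flu_death

# Scenario 1: vaccine reduces the chance of contracting the flu to 10%.
p_death_scenario_1 = 0.10 * p_flu_death

# Scenario 2: vaccine protects completely but kills 0.1% outright.
p_death_scenario_2 = 0.001

print(p_death_no_vaccine)
print(p_death_scenario_1)
print(p_death_scenario_2)
```

Both vaccine scenarios give the same 0.1% risk of dying; only what you die of differs. Refusing the vaccine carries ten times that risk.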

Why do so many people prefer taking a 1% chance of dying instead of taking a 0.1% chance of dying (or even just getting a disease) in one situation but not the other?

Also, in the original hypothetical situation some people could argue that they would prefer death over a chronic disease. However, that explanation doesn't make sense, because by refusing the vaccine they are effectively ready to commit suicide (via a 1% chance of dying from the flu), yet not ready to commit suicide if they take the vaccine and contract the disease, even though the chance of ever facing that choice is much smaller. Why? What's the difference?

This is, I believe, caused by a very primitive underlying psychological phenomenon related to a very instinctive aversion to assigning fault and blame to oneself. It's the difference between harming oneself actively vs. harming oneself passively, by inaction.

In other words, "if I die from the flu, it's not something that I actively did to myself, therefore it's not my fault." However, "if I contract a disease or die from the vaccine, it will be because of something I actively did to myself, and therefore it's my own fault."

There's a curious, deeply ingrained need in the human psyche to avoid doing anything that can be blamed on oneself. It's preferable to take a higher risk of self-harm by inaction ("not my fault") than a smaller risk of self-harm by actively doing something ("it's my fault".)

Deep down, when we get to the very bottom of the psychology behind it, it's all about shame, and avoiding it at all costs. ("It's more shameful if I harm myself via something I did than if I receive some harm by something that I didn't do to myself directly.")

This, of course, doesn't make much sense, and in these situations it's quite detrimental. It causes people to take unneeded risks that could be avoided or greatly diminished, and they can't even fully explain why they are taking these unneeded risks. And no amount of explaining probabilities is going to help. (Of course most people will come up with excuses, but they don't even themselves understand the true underlying psychological reason for their behavior. In fact, some people go to incredible lengths to rationalize these excuses, up to a point of it becoming outright ridiculous.)

This is precisely the reason why certain claims, for example about vaccines, trump any amount of actual scientific research. For instance, you can have hundreds and hundreds of extensive clinical trials with thousands and thousands of test subjects, hundreds of published and peer-reviewed papers, and the basically unanimous agreement of the scientific community that vaccines are safe... and all that's needed to trump all of that is a couple of nobodies on the internet claiming that vaccines are dangerous. They don't need actual evidence, actual research, or actual clinical trials. The mere claim that vaccines are dangerous is enough to completely nullify the enormous amount of actual research done by actual scientists.

Wednesday, October 24, 2012

Stephen King's supernatural stories

I like Stephen King's novels as much as anybody else. He really is a master of writing in an interesting and engaging manner. However, there's one aspect of many of his books that I don't like that much.

Please don't misunderstand. I don't have any problem with supernatural elements being used in fiction, as long as they are interesting and support the fictitious reality of the story. In fact, quite a few of King's books containing supernatural elements are just fine, when it's precisely the supernatural element that forms the very core of the story. (Classic examples of this include Christine, Firestarter and Pet Sematary.)

However, in quite a few of his other books the supernatural elements seem completely out of place and artificially tacked on. They feel like they don't belong in the otherwise interesting story, and they actually detract from it.

King is a master of writing suspense. Many of his novels start in a very suspenseful and mysterious manner while remaining in the realm of natural events. They often involve strange behavior by people, or even outright crazy people (who often appear normal at first, but then reveal their true personality.)

But then something supernatural is revealed behind the scenes. (Many times this person or these persons were either driven crazy by the supernatural phenomenon, or are themselves not actually normal humans.) This turn of events more often than not feels completely unnecessary, and it detracts from what otherwise promised to be an interesting thriller.

To give a classic example of a novel where the supernatural elements are completely unnecessary, take The Shining. This novel could perfectly well have been written without any supernatural elements, as a pure psychological thriller, and it would have worked exactly as well, if not better. (Of course the delusions experienced by Jack Torrance could still be described, as long as they were kept just that: delusions that did not really happen.) The supernatural elements feel artificial, redundant and tacked on. Completely unnecessary.

Thursday, October 18, 2012

Scams that cannot be stopped

The world is full of people who believe in all kinds of irrational supernatural ideas, such as the paranormal, the "spiritual world", the supernatural powers of the human mind (that only wait for them to be unleashed via proper training), and so on. Well, people are (and of course should be) free to believe whatever they want.

The problem is, lots of other people are cashing in on this psychological phenomenon. In the same way as many people believe in such things, there are others who are willing to sell them such beliefs for money.

Just here in Finland, whose population is otherwise relatively highly educated and civilized, there exist several organizations that sell books and other material and offer "training courses" related to the supernatural, the paranormal, and all kinds of such nonsense. And they are not doing it for free, either.

The thing is, these organizations and people use their websites to advertise their material, and these advertisements make many promises that obviously cannot be fulfilled. For example, one such website directly promises that by buying their material and following their training course, the participant can learn things like mind-reading, telekinesis and levitation. It's not something the website merely hints at or makes vague references to; it's something it directly promises. The prices for these "courses" are on the order of hundreds of euros.

This is, basically, misleading and false advertising, which is illegal here (and in most other countries as well.) Yet nothing can be done about it.

There exists a consumer rights organization in Finland where consumers can complain about consumer rights violations, as well as about things like illegal marketing techniques and false advertising. However, it has limited resources and has to prioritize cases, as there are far more complaints than it can handle. Scammers selling material on false promises is simply not a high-priority problem compared to other, more pressing cases.

Therefore such scammers get to keep selling their stuff, organizing outrageously expensive courses where they teach all kinds of nonsense (one of the best examples I have seen teaches about the unicorns of Atlantis; I'm not kidding, they teach it completely seriously, and the price of a one-day course is 80 euros) and promising all kinds of things they can't deliver. And there's nothing that can be done about it.

Wednesday, October 17, 2012

Flat earthers

Apparently there are people who really, honestly think that the Earth is a flat disc, and that everything we know about astronomy is just a bunch of lies (perpetuated by a gigantic, worldwide conspiracy.) And I don't mean just a few lunatics rambling in their basements, but a relatively substantial number of people who take it seriously and actually try to rationalize it. They have websites, forums, books and "documentary" films on the subject.

The funny thing about it is that when you read their web pages and online forums, and watch their videos, it's really, really hard to tell if they are being serious or if it's just a parody. (Poe's law is in full effect here.) However, apparently at least some of those people are really being serious about it.

What's even funnier is seeing how they have to struggle to argue their position. As more and more undeniable evidence has come forth during history, they have to keep changing their arguments.

In the distant past flat earthism was much simpler to believe: many believed that the Earth was just a flat disc and that the Sun and the Moon simply orbited around it, so that half the time they were above the disc and half the time below it. This was believable because people had no experience of the vast size of the Earth.

However, this model poses a serious problem: it's undeniable that day and night do not happen at the same time over the entirety of Earth. In some parts it's day while in others it's night, at the same time. This is such an utterly undeniable fact that basically no flat earther even tries to deny it. Therefore they have had to come up with something else to explain it.

They thus invented the concept that the Sun and the Moon are in fact spotlights that are floating above the Earth in circular paths, at a relatively low altitude of a few tens of kilometers. (AFAIK they don't even attempt to explain how exactly this works. Basically it works by magic. And of course the hundreds of thousands of astrophysicists around the world are all in a huge conspiracy to keep this secret.)

Of course that still doesn't work: if this were indeed the scenario, then a "sunset" would consist of the Sun getting smaller and dimmer as it moves away from the observer, until it effectively "turns off" (when the observer leaves the spotlight's penumbra.) Instead, what we see is an unchanging Sun that clearly goes down and descends beyond the horizon (from our perspective.) The Sun doesn't "turn off"; it goes below the horizon, quite clearly and visibly. The "spotlight hovering above a flat Earth in circular motion" simply can't explain this. Yet flat earthers insist that it does, and stubbornly deny the plain geometric contradiction between this alleged system and what we observe, no matter how clearly you explain it to them.
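The geometric contradiction is easy to make concrete. A minimal sketch, assuming the "spotlight" Sun hovers at a fixed altitude (the 50 km here is a placeholder within the claimed "few tens of kilometers"; the exact figure doesn't matter): the Sun's elevation angle above the horizon is atan(altitude/distance), which shrinks toward zero as the Sun recedes but never goes negative, so it could never be seen dipping below the horizon.

```python
import math

SUN_ALTITUDE_KM = 50.0  # assumed spotlight altitude; exact value is irrelevant

def elevation_deg(distance_km: float) -> float:
    """Elevation angle (degrees) of a light at a fixed altitude,
    as seen from ground level at the given horizontal distance."""
    return math.degrees(math.atan2(SUN_ALTITUDE_KM, distance_km))

# The angle only flattens out; it never reaches or crosses zero.
for d in (100, 1_000, 10_000, 100_000):
    print(f"{d:>7} km away: elevation {elevation_deg(d):.3f} degrees")
```

On a flat earth a receding spotlight Sun would crawl asymptotically toward the horizon while shrinking in apparent size; it would never be cut off by the horizon the way the real Sun visibly is.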

Over a hundred years ago it was also easier to believe that nobody had actually visited Antarctica, and especially the south pole. Only a few people claimed to have been there, and those were just claims that some people had made and some newspapers had published... They could all have been unfounded rumors or outright lies (from people who wanted to get famous by claiming to have done something they hadn't actually done.)

However, in the modern world it's harder and harder to argue that the south pole is unreachable, and that what's known as Antarctica is actually a gigantic wall of ice at the edge of the Earth (as many flat earthers like to claim.) Thousands and thousands of everyday people have visited Antarctica and even the south pole; the Antarctic continent is an active research site for many independent countries; lots and lots of independent documentary makers have made television documentaries about it; and so on and so forth. The number of everyday people who have been there is just enormous. As time passes, it becomes exceedingly implausible to keep claiming that there's a huge conspiracy to keep people from going to Antarctica.

Of course that doesn't stop the flat earthers from making such claims even today. They do.

The handwaving gets pretty wild when we start talking about distances. For example a plane flying from South Africa to the southern parts of South America takes significantly less time than what the flat Earth model would suggest. Flat earthers have a hard time explaining this away. (The conspiracy theories usually get pretty wild at this point.)

Of course the most damning phenomenon with respect to the flat Earth hypothesis is the position and motion of the stars. Wherever you are in the northern hemisphere, you will see the North Star in the sky, in the constellation of Ursa Minor, and if you observe it for a day you will see that the constellation and all the stars rotate around it. Moreover, if you eg. sail from northern Europe to North America, you will see it the whole way.

Likewise, wherever you are in the southern hemisphere, when you look away from the north you will see the Southern Cross, and if you observe it for a day you will see it and all the stars rotate around the point next to it (the south celestial pole.) If you sail eg. from Australia to South America, you can see it all the way through. Moreover, if you sail from the northern hemisphere towards the southern, you will see the North Star descend beyond the horizon as you approach the equator, and then the Southern Cross rise from beyond the horizon in the opposite direction.

All this is trivially explainable if the Earth is a rotating sphere with stars surrounding it, but a geometrical impossibility if the Earth is a flat disc.

There's a variant of the typical flat earther that actually doesn't try to deny that the Sun looks like it's descending below the horizon, that ships that sail away from the shore look like they descend below the horizon, or even that we have achieved spaceflight and have photographed the Earth from space. Instead, they try to conjure General Relativity into the mix to explain all this.

You see, according to them, General Relativity predicts that light bends near massive objects. Therefore, when light reflects off a receding ship, it bends downward. Therefore, from the shore it looks like the sea, and the ship with it, is going down, which is why the ship appears to descend behind the horizon. The same with sunsets. Also, photographs taken from space are really photographs of a flat Earth; the bending of light just makes it look like a sphere.

The major problem with this pseudoscientific explanation is not that the numbers are way off (in order for light to bend that much, the density of Earth would have to approach that of a neutron star.) No, that's not the major problem. The major problem is that they have got the effects of the light bending backwards.

If light coming from the ship (and the sea itself) bends down, then from the shore it would look like the sea is actually curving up (and thus that the ship is climbing upwards.) From the shore it would look like you are at the bottom of a bowl, with the whole landscape curving up around you. Likewise, photographed from space, the Earth would look like the inside of a bowl, not a sphere. It's just hilarious how utterly they misunderstand this.

In short, reading about how flat earthers struggle to explain away all the facts is quite entertaining.

Monday, October 15, 2012

Skepticism and closed-mindedness (cont.)

In my previous post I talked a bit about the high standards of evidence that skeptics demand before believing in something. A bit more on that subject:

As I said in the previous post, evidence is valid only if it passes the rigorous test of science and peer review. However, there's still another aspect that has to be considered: even when evidence has been verified as valid, that still doesn't actually tell us what it's evidence of. We should always be cautious to avoid jumping to conclusions even if evidence turns out to be completely valid. The next big question should be: "What exactly is this evidence of? What's actually the cause behind it?"

Let's take an example: There's undeniable, verifiable and repeatable evidence that stars in galaxies orbit at velocities that do not match the apparent mass of those galaxies. Normally the orbital velocity of a star should diminish the farther it is from the center of the galaxy, so that stars far from the center move much more slowly than stars close to the center. However, the orbital velocity of stars in galaxies is almost the same for most of them.
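For concreteness, the expectation the measurements violate is the Newtonian one: for a star orbiting outside most of a galaxy's visible mass, v(r) = sqrt(G·M/r), so the speed should fall off as 1/√r. A minimal sketch; the 10^11 solar masses is just an illustrative placeholder, not a measurement of any particular galaxy.

```python
import math

# Gravitational constant in units of kpc * (km/s)^2 per solar mass.
G_KPC = 4.301e-6

def keplerian_speed(enclosed_mass_msun: float, radius_kpc: float) -> float:
    """Circular orbital speed (km/s) around a centrally concentrated mass."""
    return math.sqrt(G_KPC * enclosed_mass_msun / radius_kpc)

M_VISIBLE = 1e11  # illustrative placeholder mass, in solar masses
for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc: v = {keplerian_speed(M_VISIBLE, r):5.1f} km/s")
```

Each doubling of the radius should slow the stars by a factor of √2; measured rotation curves instead stay roughly flat out to large radii. That mismatch is the verified phenomenon; what causes it is a separate question.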

This is scientifically valid evidence. It can be observed and measured, and these tests can be repeated over and over by independent, unbiased parties. Thus there's no doubt that the phenomenon exists.

However, the big question is: What causes this? Scientists do not jump to conclusions. Hypotheses are presented, but until those hypotheses can be verified with actual observation, measurements and testing, they are not accepted as valid.

In other fields, many believers in the supernatural, conspiracy theories, etc. succumb to a kind of circular reasoning. When faced with evidence that seems valid, they will offer a cause or explanation for it, even though it might not be warranted. When asked to justify the claim, they will present that very evidence as the justification.

To understand what I mean, assume that someone said "the orbital velocities of stars in galaxies are caused by unseen dark matter". Then someone asks: "How do you know?" And the reply comes back: "Well, you can measure the speed of stars in galaxies. They all have the same orbital velocity."

You see how this doesn't work? If you present an explanation for observed evidence, and then back up your claim with the very same evidence, you are not getting anywhere. It's a circular argument. The evidence shows that the phenomenon happens, not why it happens. It's the same as "God created life." "How do you know?" "Well, life exists, doesn't it? There you go, undeniable evidence."

Or like in a Monty Python sketch: "What are you doing to that woman?" "We are going to hang her because she's a witch." "How do you know she's a witch?" "Well, we wouldn't be hanging her if she weren't, now would we?"

However, that's exactly what the believers and conspiracy theorists do. And they don't even realize it.

Skepticism and closed-mindedness

There's a really widespread misconception, both in real life and in fiction, that skepticism means "the conviction that everything must have a natural explanation." The common picture of a skeptic is a stubborn old fart who denies anything seemingly supernatural out of principle, and refuses to even consider any possibilities.

No, that's not what skepticism means. One narrow definition of the term in colloquial language might have that meaning, but that's not what it means in terms of the philosophy of science. What skepticism means is "not accepting extraordinary claims at face value without valid evidence". Personally I would also add to that "and passing the rigorous test of science".

Skepticism is not about denying some explanations on principle while accepting others just because they are more "sciency" and "natural." Skepticism does not, in fact, have any preconceptions on what the true explanation for something is. Skepticism is simply having more or less strict standards of evidence before a claim is accepted (regardless of what the ultimate explanation turns out to be.)

Someone saying "this must have a natural explanation" is not a skeptic (in the proper sense.) That's because that person is biased and has a preconception. That person is not demanding valid evidence, but jumping to a conclusion. That's not what skepticism is about.

The fact that so far pretty much everything has turned out to have natural causes, and that most skeptics draw a working assumption from this, is just a side-effect. In general, assuming that something is a natural phenomenon is a much safer bet than making the opposite assumption. In that sense you are much less likely to be deluded into believing falsities. However, that's not what makes you a skeptic. That's just a side-effect of being one.

Many people who want to believe in the supernatural, or in UFOs, or cryptozoology, or the paranormal, or conspiracy theories, or who are denialists of some sort, often accuse skeptics of being "closed minded."

They are making this very mistake. They think that skeptics deny claims and (alleged) evidence out of principle and stubbornness. They are incapable of accepting that (true) skeptics do not deny any of it out of principle or stubbornness. Instead, they are applying high standards of evidence for extraordinary claims. Evidence of an extraordinary claim must be valid, it must be verifiable, and it must pass the rigorous test of science. Else it's not worth much.

These people seem incapable of understanding the concept that just the mere existence of what they perceive as "compelling evidence" is not worth anything. You can have as much compelling evidence as you want, but all by itself it's quite useless. For the evidence to be useful, it has to be examined, observed, tested and experimented on, without bias, repeatedly and by unbiased independent parties, in order to verify its validity. Preferably, publications should be made and put to the peer reviewing test. Just because something sounds plausible doesn't automatically make it so. A lot of things sound plausible while being completely invalid. If science were driven solely by what sounds plausible, we wouldn't advance much.

So it doesn't matter how much allegedly "compelling evidence" you have: If it hasn't been tested and verified to be valid, it's not worth much. That's not being closed-minded. That's being practical. Demanding high standards of evidence just works.

If "open-minded" means, from these people's point of view, accepting claims just because they sound plausible and have "tons of compelling evidence", then they will be misled and deluded. Unfortunately the world doesn't work like that. The human mind is a master at fooling itself into believing all kinds of falsities. "It sounds plausible to me" is one of the biggest mistakes a person could ever make as an argument for believing an extraordinary claim.

Sunday, October 14, 2012

Most common mistakes in zombie movies

While not the first zombie movies ever made, the original "zombie trilogy" by George A. Romero is by far the most influential set of movies of the genre in all of movie history. They popularized zombie movies, and they set all the major "rules" of zombies. And on top of that, they are really good movies.

Perhaps the best characteristic of these movies is the extreme realism they depict. Of course I'm not talking about the very existence itself of zombies in the movies' universe, but everything else. The only "supernatural" thing depicted in the movies is that the bodies of dead people, for an unknown reason, get reanimated. The bodies are still dead, they just move and have some minimal brain functions. Everything else follows very physically plausible laws, such as:

  • The bodies decompose over time due to natural processes, because there's little to no immune system fighting the micro-organisms that consume dead flesh. This can be clearly seen as the trilogy progresses: the zombies in the first movie are only slightly decomposed, while by the third they are in a very advanced state of decomposition.
  • The zombies move and act like a human would act when their brain capacity has been greatly diminished. Only some basic instincts are left. The zombies do not gain any special superhuman powers from having been reanimated (other than their bodies continuing to move even after a massive amount of damage or decomposition.)
Another great aspect of the movies is that they do not even try to "explain" why dead bodies are reanimating. They don't need to. It's completely inconsequential, and in fact makes the movies scarier and more enticing. It also works from a storytelling point of view: Why spout unscientific technobabble when you can just as well leave it as an unexplained mystery? It works much better that way.

Many, many other movies try to cash in on the genre. Some are acceptable, some are just horrendous.

The most common mistakes that such zombie ripoff movies make, that usually end up just ruining the movie and not making it better, are:
  • Try to explain the origin of the zombies. Why do they not understand that a good zombie movie does not need any such explanation? That it works better if it's left as a mystery. Any technobabble excuse you can come up with will only be worse than leaving it unexplained.
  • Limit the "outbreak" to a small, contained place, and have the authorities try to keep it from spreading. While this can work with some types of "zombie" movies, it's usually a bad idea. It just makes it into another "disease outbreak" movie, just a much less plausible one.
  • Speed up the process to a supernatural degree. In other words, make people convert to the rotten form of zombies way too fast, much faster than flesh decomposes naturally. The worst examples I have seen turned a normal person into a completely decomposed zombie in mere seconds! No, just no. The zombies in the original trilogy decomposed slowly, and it worked. There's just no need to "speed things up".
  • Give the zombies supernatural powers. This doesn't make them scarier, it just makes them ridiculous. A running zombie is still ok (as long as it runs at normal human speed), but anything superhuman is absurd. The worst I have seen had "zombies" that could crawl on walls and ceilings, and were significantly stronger than normal humans. This just doesn't work, and it's one of the stupidest ideas for a zombie movie ever.
  • Make killing them completely implausible and/or supernatural. This includes making them ridiculously hard to kill, and especially making them ridiculously easy to kill (even easier than a normal human being.) The most egregious example I have seen was a zombie movie where the zombies exploded into a cloud of ash when killed with fire. I'm not kidding here. It was exactly as ridiculous as it sounds.
In short: Don't try to explain what causes zombification, it just doesn't work. Make the zombies act like dead bodies that move and have some minimal brain functionality, no more, no less. Try to keep it as realistic and natural as possible (other than the fact that dead bodies are reanimated for some unexplained reason.) Most importantly, do not go the route of giving them superhuman/supernatural powers or characteristics. None at all. It just doesn't work, and it doesn't make them scarier. It just makes them sillier.

Saturday, October 13, 2012

Some thoughts about "The Cabin in the Woods" (2012)

This isn't something that grinds my gears, but hey, it's a blog...

Major spoilers ahead, so if you haven't seen the movie and want to see it, don't read this.

Just saw the movie The Cabin in the Woods, and immediately after, I thought to myself: "This would have actually been a lot better if they had removed all the scenes of the control room up to the point where the stoner discovers the camera hidden in the lamp. After that it could have been almost exactly as it was (perhaps with a few added control room scenes.) It would have been a great twist: so far the movie seems like a regular horror/slasher, and then it suddenly turns into something else entirely, with more going on than first met the eye."

That would indeed have been a great twist... but then I thought: Maybe it would have actually been too cliché of a twist? I mean, it would have been a subversion of the "traditional" horror/slasher, but in the end, perhaps it would have felt way too artificial of an attempt to subvert it. Perhaps, after all, showing the control room right from the beginning was possibly the better idea. No twist that feels forced and artificial...

I don't know. It would be interesting to compare public reactions to two versions of the movie.

Obsessive vegans

Vegetarianism/veganism ranges quite a lot between people. The mildest vegetarians simply don't eat meat if there's a vegetarian choice, but don't worry too much if they have to eat a bit of meat because there's nothing else. They might also have no problems in eg. eating fish. They usually don't have any problems in eating animal products that aren't meat (such as milk products and eggs.)

Vegans, unlike vegetarians, do not eat any animal products (not even milk products.) The most open-minded vegans, however, likewise do not worry too much if there's a situation where there's no choice but to eat some animal products (or even outright meat.) They do not adhere to veganism religiously; they simply follow it given the choice, but do not stress too much about it otherwise.

The type of vegan that really amazes me, though, is the obsessive vegan. Not only do they avoid all kinds of animal products religiously, they are in fact pathologically obsessive about it. They treat animal products like a person who has an extreme form of peanut allergy treats peanuts. Even a single molecule that has come from an animal is too much!

I'm not exaggerating a bit here. For example, I was once queueing at a Subway, and before me there was this kind of hippie-looking young man with dreadlocks and clothes that looked like they were made of burlap. (He even had a bag made of the same material.) I think you know the type. Because it was a busy hour of the day, the Subway employee was dealing with two customers at a time (ie. making two sandwiches at the same time.)

When the hippie before me, and the person before him, got their turn, the latter asked for a sandwich containing sausage and the hippie asked (unsurprisingly) for the vegetarian one. The employee asked him if he minded if she made the two sandwiches at the same time, as the other one would contain sausage. The hippie refused. Those sterile gloves were not going to touch both the sausage and his vegetarian sandwich. The employee had to make the sandwiches one at a time (and change gloves in-between, of course), delaying the entire queue.

I don't understand what exactly it is that this hippie was trying to achieve. The whole situation felt like he regarded meat as a religious profanity, and that if even a single molecule of it entered his mouth, he would be unclean and defiled.

Such vegans only make their lives (and sometimes other people's lives) a lot more difficult than they have to be. They have to watch everything they eat, as if they were extremely allergic. And for what? What good does it do anybody?

Friday, October 12, 2012

Misunderstandings about speed of light limits in fiction

Almost 100% of fiction out there dealing with space travel gets the physics wrong ("almost" because there are a few exceptions, mostly some sci-fi novels where the author knows better) with respect to the "magical" limit of the speed of light.

Basically, most sci-fi authors only know the "headline" version of the theory of relativity. Namely: You can't travel faster than the speed of light (in vacuum), period. (Not by any conventional means, at least.) Thus to get past this annoying limit, they invent all kinds of fictitious modes of travel, such as "hyperspace" and "warp speed" and whatnot.

They seem to think that, for example, if you wanted to travel from Earth to Alpha Centauri, it would take you at least 4.3 years to get there (using conventional means of travel), no matter what. It's (according to their limited understanding) physically impossible to get there faster. Likewise if you wanted to travel to the Andromeda galaxy, it would take at minimum 2.5 million years. In fact, the vast majority of people think like this.

It's not that simple, though, and the misunderstanding stems from not understanding the theory of relativity properly. The best example of why they don't understand it is that they have often heard of the so-called "twin paradox" but they don't understand what it means or how it works. (The "twin paradox" is the seemingly paradoxical prediction of the theory of relativity that if a person were to travel at great speed away from Earth and then return, that person would have aged less than their twin who stayed on Earth the whole time. The seeming "paradox" comes from the question of why it's the traveling twin who ages less and not the one who stayed, even though the situation seems symmetrical from both of their perspectives. The answer lies in the changing frames of reference of the traveling twin.)

The fact is: There's no limit to how fast you can reach a distant point in space. You could travel from here to the Andromeda galaxy in one second. (This of course requires you to travel really, really close to the speed of light, but at no point would you be outrunning light.)

The vast majority of people would immediately protest to that claim, and say that it would break the prediction of the theory of relativity. This is rather ironic because relativity precisely says what I just wrote above. There's a great misunderstanding here.

The point is: Even if you are traveling from here to Andromeda in one second, you would at no point be traveling faster than light. At no point would you outrun a photon sent there at the same time. (In fact, if you were to measure the speed of that photon, you would measure it to be exactly c, even though you yourself are traveling at almost c, which seems really contradictory. But that's just how relativity works.) Taken to the limit, from light's "own perspective" it takes no time at all to arrive there. Again, it seems unintuitive, but that's just how it is.

However, from Earth's perspective it will take you 2.5 million years to reach Andromeda. In other words, if you were to travel from here to Andromeda in 1 second and then back in 1 second, you would encounter an Earth that's 5 million years older than when you left (even though you yourself have only aged by 2 seconds.) That's how relativity works.
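For the curious, the numbers involved can be sketched in a few lines of Python. This is back-of-the-envelope: the 2.5 million light-year distance is the usual rounded figure, and for speeds very close to c the traveller's proper time is approximately the distance divided by gamma times c.

```python
import math

C = 2.99792458e8      # speed of light, m/s
LY = 9.4607e15        # metres per light-year

d = 2.5e6 * LY        # Earth to Andromeda, ~2.5 million light-years
tau = 1.0             # desired traveller ("proper") time: one second

# At v very close to c, proper time is tau ~= d / (gamma * c), so the
# Lorentz factor you would need is roughly:
gamma = d / (C * tau)

# How close to c is that? To first order, 1 - v/c = 1 / (2 gamma^2):
deficit = 1 / (2 * gamma**2)

print(f"required Lorentz factor: {gamma:.2e}")
print(f"1 - v/c:                 {deficit:.1e}")
```

In other words, nothing in relativity forbids the one-second trip as measured by the traveller; it "only" requires a speed differing from c by an absurdly tiny fraction.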

There is, however, a practical reason why you couldn't travel from Earth to Andromeda in 1 second: Inertia. The acceleration required to reach the necessary speed would be so immense that it would squeeze your atoms together (probably causing a thermonuclear explosion when your atoms fuse together due to the acceleration.) You (and your ship) would be completely annihilated in a fraction of a second.

To understand how big of a problem this is, assume that you were to travel just to Alpha Centauri in a spaceship that first accelerates at a comfortable rate of 9.8 m/s² (ie. you would experience the same acceleration as on Earth) to the mid-point, and then decelerates at the same rate until it stops at Alpha Centauri. It would take you a bit over 3.5 years of your own time to get there like this. To get there faster you would need to accelerate harder, which becomes more and more uncomfortable the faster you want to reach your destination. (The human body would also probably start showing adverse effects if exposed to unnaturally high acceleration for prolonged periods of time.)
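The travel time for such a flip-and-burn trip can be checked with the standard relativistic-rocket formulas; here's a sketch in Python (the 4.37 light-year distance is the usual figure for Alpha Centauri):

```python
import math

C = 2.99792458e8     # speed of light, m/s
LY = 9.4607e15       # metres per light-year
A = 9.8              # comfortable 1 g acceleration, m/s^2
YEAR = 3.156e7       # seconds per year

d = 4.37 * LY        # Earth to Alpha Centauri

# Relativistic rocket, accelerating to the midpoint and decelerating after:
#   traveller (proper) time: tau = 2 (c/a) acosh(1 + a (d/2) / c^2)
#   Earth time:              t   = 2 (c/a) sinh(acosh(1 + a (d/2) / c^2))
x = 1 + A * (d / 2) / C**2
tau_years = 2 * (C / A) * math.acosh(x) / YEAR
t_years = 2 * (C / A) * math.sinh(math.acosh(x)) / YEAR

print(f"traveller time: {tau_years:.2f} years")  # ~3.6 years
print(f"Earth time:     {t_years:.2f} years")    # ~6 years
```

Note that even on this short trip the traveller already ages noticeably less than Earth does, and the gap only grows with distance.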

The biggest problem in reaching distant stars and galaxies is thus not any limit on speed (at least if you ignore that Earth will age much faster than you), but the limits imposed by inertia on the travelers (and the spaceship itself.) Of course there's also the question of how to produce all the energy needed to reach these speeds (which is also a rather big limiting factor), but that's another question entirely.

The only way to achieve this would be to somehow nullify inertia. As far as I know, modern physics knows of no phenomenon that could be used to achieve this. (Hypothetically you could bend space in a manner that causes you to "fall" towards the desired direction. You would then be in free fall rather than accelerating, and hence inertia wouldn't be a problem. However, as far as I know, there's no known mechanism to bend space like this. And even if there were, it would probably require impossibly large quantities of energy.)

Sunday, October 7, 2012

Feminism should be about equality... but isn't

The basic tenet of feminism is equality. Everybody should be treated equally, completely regardless of their gender: have the same rights and duties, the same opportunities, and be respected in the same way as anybody else, with gender treated as the inconsequential thing it is (in contexts where gender should in no way be an issue.)

While there's still a lot of sexism and inequality in this respect, even in the civilized world, the feminist movement has in some ways achieved the exact opposite of this basic tenet: People are actually afraid to treat women the same way as men, for example in how they are depicted in fiction and art. If female characters in fiction do not get special treatment and are depicted in such a way that their gender is a complete non-issue, the author can actually be accused of sexism! This is the complete opposite of what equality should be!

As an example, consider the following Magic: The Gathering card:

About 90% of the discussion at the official Magic: The Gathering website about this particular card consists of people talking about sexism (with some accusing the art of being sexist, and others arguing that it's a completely ridiculous claim.)

Note that the art is not gratuitous, but is actually completely in accordance with the in-universe depiction of those two characters. The woman is basically an extremely powerful and evil necromancer witch who has no qualms whatsoever about making people suffer and abusing people for her own gain and to advance her goals. These two characters have fought several times previously, and the woman has previously cast a curse on the man (which is what the quotation at the bottom of the card refers to.) This card depicts another fight between these two characters.

If the situation is reversed, and it's the woman who is depicted in a dominant position, nobody complains about sexism. This is not just a hypothesis, but actual reality, as there's a counterpart to that very card where it's the woman who is abusing a male character, namely this:

This is the same female character (but a different male one), but now it's the woman who is in a dominant position, making the man suffer using her supernatural powers. It's not the only example; another one is this:

Nobody complains about "sexism" in the art of these two latter cards, even though the situation is basically the same, just with gender roles reversed.

If there were true equality between genders, nobody would mind which character has which gender. More importantly, though, there would be no double standard, where accusations of sexism are thrown only on one situation but not when the roles are reversed. This is just hypocrisy.

Religion and politics in the United States

Although I'm not an American, I find the political situation of the United States quite interesting. In a morbid way. Not because of presidential elections or anything like that, but on a larger scale.

The political landscape in the United States has changed radically during the past 10 years or so, specifically in relation to religion.

You see, 20-30 years ago it would have been political suicide for a politician in the United States to refer to his or her own religion. Making a reference to being, for example, a pentecostal or a baptist would have driven away all the voters of other denominations. ("Oh, he's a catholic. I'm certainly not voting for him! They are the church of Satan!" "Oh, she's a baptist. They are nutjobs! Vade retro!")

This has turned completely on its head: Nowadays it would be political suicide not to make references to the (Christian) religion and not to profess one's faith. What was more or less a practical taboo some 20 years ago is now a practical requirement if one wants even a slight chance of getting elected. Nowadays it doesn't really matter if you are a catholic, a baptist or a mormon (no matter how wildly different and conflicting those denominations are.) As long as you talk about "God" (always in a vague sense, of course) and how the United States is a religious country, it doesn't really matter which particular denomination you belong to. What was once a show-stopper is now a non-issue, as long as you profess "God" (without going too much into specifics.)

During the last decade or so, the war on science, secularism and even education has escalated to incredible proportions in the United States. The biggest enemy of the country was once the "communists", but today they have been replaced by "atheists". They are the source of all that's wrong with the country and the entire world. They are the modern-day witches. It's actually amazing how ridiculous the conspiracy theories about atheism can be there, and how seriously many people believe them.

There has also been a really strong historical revisionist movement in the country. That is, they are trying to rewrite the history of the country to be almost the exact opposite of what it was. It's a well-known fact that a significant incentive for making the United States independent from England was to escape religious persecution by the church, and to create a nation where there is religious freedom and nobody is forced to belong to any specific religion, or any religion at all for that matter. The government shall not make any law imposing any kind of religion, there shall not be any religion-based requirements for elected officials, all citizens have the right to freely hold and express any opinions, and so on and so forth.

There are serious attempts to turn this on its head, attempts to rewrite history books to make the country a theocracy, founded on religious principles, and basically defile the entire idea that the founding fathers of the United States worked so hard to achieve.

The situation has escalated more and more during the last years. I have been wondering: Why now?

Maybe it's all the terrorism and extremism? The economic recession? Scary environmental prospects promising a grim future? A combination of all of them?

It's probably a form of scapegoat mentality. People feel insecure and unsafe, and they want to find a culprit. If they could just find the culprit and get rid of it, everything would become good and well once again, like it was in the golden era of the 1950's. The communists aren't really that big of a threat anymore, so there must be some other culprit. Who are the witches that must be hunted down in order to rid the country of this curse?

Saturday, October 6, 2012

Failblog's fall... and others

For years Failblog was basically one of the best online blogs in existence. Its major shtick was to publish (at least daily, often even several times a day) images and videos, sometimes other forms of media, of people or things failing in some manner (usually hurting themselves in the process, although that was certainly not the only type of failure), such as skateboarders failing a jump and faceplanting, boats getting destroyed when a crane accidentally fails and drops them, and so on and so forth. (The blog has a strict policy of only publishing fails that remain at some level of good taste. No people getting killed or seriously injured for life, for example, and no gore. Emphasis on humor, not on morbid curiosity.)

It remained like that for a rather long time, and I was a very avid follower.

Then at some point it started to change. While Failblog had always published other kinds of things (mostly related to internet memes) from time to time, it was quite rare, and they mostly concentrated on actual fails. However, at some point they started publishing more and more material unrelated to fails and more related to whatever internet meme was popular at the moment.

For a time it was still not all that bothersome because the quantity of actual fails was still pretty high. However, as time passed the site basically turned into a blog of internet memes. The amount of actual fails plummeted to almost nonexistent. For example, the last time I checked, something like one entry out of twenty was what could be considered an actual fail (even by the loosest definition) while the rest were completely unrelated. (Counting in a stricter manner only the type of fails they used to publish in the past, the number is even smaller.)

I struggled with it for several months, perhaps almost a year, but at some point I got completely tired of it and stopped following the blog altogether. It was outright boring, and wading through hundreds and hundreds of entries to find the half dozen actual fails was a pain. It made no sense.

They should really rename it to something else, because the name has become a complete misnomer. It's a real shame, because it was one of the greatest blogs in existence, but now it just sucks.

Of course Failblog is not the only online medium that has suffered from this. Many blogs seem to go through this, as do other types of websites, such as webcomics.

One of the most infamous and well-known examples is probably MegaTokyo. When it started, it was a light-hearted, humorous, "animesque" (well, "mangaesque" would be more accurate) webcomic that was fun to read and follow. It remained like that for several hundred strips. Then it slowly but steadily changed.

There were apparently more reasons for this change than just the authors shifting genre, as it seems that one of the two authors got into a dispute with the other about the content and left. Anyway, regardless of what the actual reason was, the changes were for the worse. While the quality of the artwork became significantly better and more detailed over time, the same cannot be said of the storytelling. Ok, the storytelling became more detailed, but it certainly did not become better; quite the opposite. The storytelling slowly became complex, intricate, quite hard to follow and outright boring. It was also paced and depicted in a manner that made it even harder to follow, jumping all over the place with no well-defined structure. Only rarely was there any humor left, and the vast majority was just a complicated mess.

It doesn't help that the author seems incapable of establishing characters properly, in a manner that lets readers easily remember and recognize them. There are tons and tons of characters, but it's very hard to remember who is who, what each character's role is in the overall story, and why he/she is doing whatever he/she is doing at the moment. It doesn't help either that many characters look very similar and thus are easily confused with each other. (This is very much unlike what the comic was at its beginning, when there were just a few characters who were very easily recognizable and easy to remember.)

This is, of course, not the only webcomic that I have stopped following. I used to read Least I Could Do, but I haven't been following it for years. (It went through several complete art shifts due to changing artists, but that's not the reason. The reason is that it just became boring, repetitive and lacking in fresh ideas and content.)

One of the webcomics that is still widely regarded as one of the best, but which I stopped following out of pure boredom, is Penny Arcade. The same story repeats itself: For years it used to be smart, witty and funny, poking fun at video games and related stuff. Then it slowly started becoming more and more obscure, less and less funny, with the actually funny strips becoming rarer and rarer... At worst I could sift through about 50 strips and find maybe 2 that were actually funny. I just stopped following it.

Friday, October 5, 2012

Video quality of youtube wrestling videos

This is a really short one, but...

At least 80% of the wrestling videos found on YouTube have completely abysmal image quality. They are really low-resolution and compressed beyond belief. Many are so compressed that it's almost impossible to discern what's going on, and even the good ones have very visible compression artifacts.

I would understand that if that was the norm with all YouTube videos, but no. The vast majority of anything else you find there is just ok. Except wrestling videos. What gives?

Thursday, October 4, 2012

It's trendy, therefore it sucks

There exists a rather curious type of person. This is typically a young adult, or at most a middle-aged one, and they might in fact be computer adepts, if not outright computer nerds, who like technology, innovation and progress... yet they still somehow manage to act like an old fart who detests everything that's new and trendy, constantly saying "bah, these kids today and their shiny gadgets... back in my day..." (They don't literally say that, but they act like it.)

It seems that these people detest and denigrate anything that's new, flashy and popular, for the sole reason that it's popular. Especially if it's new technology. Out of principle, not because of any actual rationale. If it's trendy and hip, it must suck.

There are people who still detest and denigrate, for example, the iPhone. Not for any rational reason, but just because it's popular. They also usually denigrate even the idea of something like browsing the web on a cellphone, often without ever having tried it themselves (or if they have, they have done it for like 10 seconds, with a highly biased attitude.)

They will often use exaggeration to try to make things sound worse than they are. They will describe cellphone displays as being "postage stamp sized", they will say how you have to "hold it a few inches from your eyes to see anything" (that's something like 10 cm, give or take) and so on. They will also invent flaws that the thing doesn't have (such as "the screen gets dirty in just a few minutes, making it unusable.")

The technology doesn't even have to be new and flashy. It's enough for it to be popular. For example, quite a few people even today outright refuse to make a Facebook account, even if it could be useful to them (eg. to communicate with a playgroup or similar.) Not for any rational reason, but solely because it's popular. Mind you, these exact same people might have accounts on a dozen random online forums that they frequently visit. What makes Facebook so different from those, they cannot say (other than an implied "it's popular, therefore I oppose it.")

Sunday, September 30, 2012

Evil Dead 2

There seems to be a really strange consensus that the movie Evil Dead 2 is better than the first The Evil Dead movie. Having seen both several times, I just can't comprehend the reasoning.

The first movie is a very low-budget pure horror film. Despite its extremely low budget, it's really well made. The filmmakers utilized every limited resource they had to make the best film they possibly could, and it really shows. Many of the special effects might be simplistic and antiquated even by the standards of the time, but they are surprisingly well made and effective considering what they had to work with and how little money they had.

In short, as a horror film it's really effective and well made. It's gory, it's gritty, it's gruesome, it's seriously made, and it doesn't shy away from showing you the goriness in full detail.

The second movie is not a sequel. It has a relatively short segment at the beginning that's a kind of remake of the first movie, changing many details (such as the number of people who went to the cabin the first time), and then there's a continuation of this half-remake, which constitutes the rest of the movie. (AFAIK the reason why it's not a pure sequel, and why they "remade" the first movie with changed details as an introduction, has something to do with the filmmakers having lost the rights to the original movie or something like that.)

The three major problems with the second movie are that it's not a horror film but instead a comedy ("horror comedy" some would say), it does not take itself seriously and instead goes for a supposedly "badass" protagonist (that's more hammy than "badass"), and unlike its predecessor it censors itself from showing the gorier scenes, which makes little sense.

In short, it's like a bastardized version of the first film that has been remade and self-censored to get a lower MPAA rating, and which replaces pure gory horror with slapstick comedy. And for some inexplicable reason most people think it's better than the first film!

They are going to release a(nother) remake of The Evil Dead in 2013. I already know it's going to suck because I'm almost completely sure that they will make it like the second movie instead of like the first one, just because the second one is considered better. Kudos to them if they make it a pure horror movie, but I'm not holding my breath.

Thursday, September 27, 2012

Firefox version numbering

Version numbering of software products is far from being a standardized thing, but the most common convention is to have something along the lines of:

<major version>.<minor version>

For example the version number "2.1" means major version 2, minor version 1. (Generally the major version starts from 1 and the minor version from 0. A major version 0 is often used to denote an alpha or beta version that's not yet complete.)

The major version number usually indicates some kind of significant milestone in the development of the program, and is usually accompanied by significant improvements or changes. Sometimes it could mean a full (or significant) rewrite of the code base (even if outwardly there's little visible change). Regardless of what exactly has changed, it's usually a very significant major change (either internal or externally visible).

Some projects keep the major version so significant that they hardly ever increment it, reserving it for really huge milestones (such as rewriting the entire application from scratch, or changing it so much that it's effectively a completely different application, even if its purpose remains the same.) The Linux kernel is a good example of this: its major version was only recently incremented to 3, even though the kernel has been under constant development for over 20 years.

The minor version is usually incremented when new features and/or a significant amount of bug fixes are introduced. In many projects even relatively major new features or improvements only cause the minor version to be incremented.

Some projects use even further version numbers. A typical custom is to use a third number (so version numbers look eg. like "2.5.1") which often denotes simple bug fixes or cosmetic changes, but generally no new or changed features. Some even use a fourth number (such as the abovementioned Linux kernel) for an even finer distinction between bugfixes/cosmetic changes and actual feature changes (so that even such small changes can be divided into more and less significant ones.)

Anyways, at the very least it's a good idea to use a two-numbered versioning scheme to indicate a clear difference between major and minor changes. This is very informative. When you see the major version number increase, you know that something really big has happened, while the minor version number just indicates some more minor changes.
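To make the scheme concrete, here is a minimal sketch (my own illustration, not any particular project's code) of why dotted version strings of this form are usually parsed into tuples of integers: tuple comparison then matches the major/minor semantics described above, whereas plain string comparison would get eg. "2.10" vs "2.9" wrong.

```python
def parse_version(s: str) -> tuple[int, ...]:
    """Split a dotted version string like '2.5.1' into a tuple of ints.

    Tuples compare element by element, left to right, so the major
    number dominates, then the minor, then any further numbers.
    """
    return tuple(int(part) for part in s.split("."))

# Major bump outranks any minor number:
assert parse_version("2.1") > parse_version("1.9")
# Numeric, not lexicographic ("2.10" > "2.9", but "2.10" < "2.9" as strings):
assert parse_version("2.10") > parse_version("2.9")
# A bugfix release compares as newer than its base version:
assert parse_version("2.5.1") > parse_version("2.5")
```

Nothing about this is specific to Python; it's simply the reason the convention is written down as dot-separated numbers in the first place.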

The Firefox project used to use this kind of version numbering for almost a decade (with the major version slowly incrementing from 0 to 4 during that time). This was quite informative. When the major version increased, you knew that there would be some significant changes to the browser.

Then, for some strange reason, they threw that away. Now they are using the major version number for what almost every other project in existence uses the minor version number for, and they keep the minor version practically always at 0. They keep incrementing the major version number every few weeks, just for some minor improvements and added features.

I do not understand the rationale behind this. The version numbering of Firefox has practically lost its meaning and is useless. It's no longer possible to tell from the version number whether one should expect really big changes or just minor cosmetic ones.

Moreover, if they ever make a really significant change or improvement to the browser, hardly anybody will notice, because the version numbering has lost its meaning and cannot be used to convey this information anymore. No more will you see headlines reading "Firefox 3 released, major improvements." It will just be "so they released Firefox version 57, so what? What did they change? The color of the close tab button?"

Saturday, September 15, 2012


In my old "blog" (of sorts) I have written extensively about conspiracy theories and believers in them, and the reasons why people believe in them.

One aspect of this is, I think, that believing in conspiracy theories is a form of pseudointellectualism. Especially people who have memorized hundreds and hundreds of arguments, and can flood a discussion with them in rapid-fire, shotgun fashion, probably get a sense of being quite smart and "educated": they have the feeling that they are experts on the subject in question and possess a lot of factual knowledge about it, and thus can teach it to others and use all these "facts" to argue their position and win any debate.

In other words, they are pseudointellectuals. They feel that they have a lot of factual knowledge on the subject, and they might get a sense of intellectual superiority, even though in fact they are just deluded. They are often good at debating and arguing their position, but they do not realize that they are spouting nonsense. Of course this nonsense is wrapped in tons of seeming logic and apparently valid arguments, which might sound plausible when you don't examine them too closely, but it's still just nonsense.

The same is true not only of conspiracy theorists but also of creationists, ufologists and believers in the paranormal. (When we examine all these positions closely, there actually isn't much difference between them. They are all basically religious belief systems.)

I think that it's precisely this sense of intellectual superiority that makes at least some of these people so firm in their beliefs. They might not admit this even to themselves, but deep down this feeling of superiority makes them feel good, better and more "educated" than other people.

In fact, some pseudointellectuals use precisely their own sense of intellectual superiority to belittle their opponents.

Perhaps the most prominent example of this, a pseudointellectual in a league of his own, is William Lane Craig. He feels superior for having an education in "philosophy" and has several times belittled his opponents for not having such an education and therefore not having the necessary "qualifications" to debate him on the same level.

This is just an outstanding level of douchebaggery. An incredible amount of smugness. And this even if philosophy were any kind of relevant field of science (which it isn't, which just makes his attitude even worse.)

I don't think it's easy to surpass this level of pseudointellectualism.

Tuesday, September 11, 2012

Show, don't tell?

"Show, don't tell" is one of the rules of thumb of proper storytelling in visual media (such as movies, TV series and comics). It means that, in general, it's better to show something happening rather than just telling what happened. It can apply even to written stories, where it means that events should be "shown" as narrative rather than merely explained.

This is not, of course, a hard rule. Sometimes it's better to just tell something as a quick summary rather than going to the lengths of actually showing the events in full. Too much "showing" can actually be more boring than just quickly telling what happened.

What grinds my gears is when people use the "show, don't tell" argument to criticize works of art in situations where it really doesn't apply.

There are excellent examples of things not being shown, only hinted at in dialogue. For example, consider the famous "hamburger scene" in the movie Pulp Fiction. The dialogue, and in fact the whole situation, makes reference to those people having screwed over Marsellus Wallace somehow, yet we are never shown what happened. We are only told that it happened. However, we don't need to be shown. In fact, the scene would not be as good if it spent time showing what happened rather than just telling about it through the dialogue. As it is, the scene is just superbly done. (In other words, rather ironically, the scene becomes better when it does not blindly adhere to the rule, and instead breaks it.)

I think that sometimes people use the "show, don't tell" as an excuse to criticize a work of art that they don't like. For example, an acquaintance of mine criticized the speech made by the Architect in the second Matrix movie for breaking "show, don't tell". I honestly cannot understand what exactly he was suggesting. I think the speech is just spot-on and does not need any "showing". (I brought up the Pulp Fiction scene as a counter-argument and, according to him, that was different. I was unable to get a clear explanation of why.)

Monday, September 10, 2012


At least 90% of internet forums out there have a strict rule against so-called necroposting. This is defined as responding to a thread that has not had any activity in a long time. (The amount of time varies from forum to forum, ranging from years down to just a few months.) Necroposting is somehow considered a really bad breach of netiquette or something. If anybody necroposts, a swarm of people will immediately castigate the culprit with angry reminders that the original thread died several months ago! In fact, a few forums even go so far as to automatically lock threads that have not had any activity in a given amount of time.

I have never understood (nor will ever understand) what exactly is so bad about "necroposting". None of the arguments given against it make any sense.

So what if a thread has not been active in many months, or even years? Someone might still have something new to add to it. It could be a new perspective, a new idea, or even an update on recent events related to the subject of the discussion. Posting it in the thread that discussed the subject keeps it in context, keeps the forum better organized, and doesn't scatter discussions of the same subject across different random threads.

Someone might respond with something that has already been said in the thread (eg. because of only reading the first post and responding to it, instead of checking the rest of the thread first). However, this has nothing to do with necroposting. People do that all the time even in threads that have had abundant activity even during the last hour. Reprimanding such a person for doing that just because the thread happened to be old (but not doing so if the thread happens to be recent) is irrational, inconsistent and outright mean.

Fortunately there are some online forums that do not have any such nonsensical rule. Yet even in those you can see people complaining if someone necroposts (and then regular users telling them that "we don't have such a rule here".) The irrational hatred of necroposting wants to spread even to forums where it doesn't exist...

There is also another, potentially positive aspect to "necroposting": it draws attention to old threads that might be of interest to new people. When a forum has thousands and thousands of threads, nobody is going to wade through all of them. If someone, however, posts to a years-old thread, it's often because the thread is interesting, and the post will bring it back to the top (or at least to the list of threads with unread posts.) Some people who had never seen that thread may find it interesting. (And this isn't just speculation. Recently on a forum I witnessed exactly this happen. Someone "necroposted" in a very old thread, and somebody else commented that the thread was actually interesting, and expressed gratitude for drawing attention to it.)

The aversion to necroposting needs to die. It makes no sense.

Friday, August 31, 2012

Indiana Jones 4 hatred

The fourth Indiana Jones movie is universally hated. What is the most commonly cited reason for this?

Aliens. That's it. Aliens. Sure, there are many other annoyances and defects often cited as well, but the aliens are by far the most common element to all complaints. In fact, it's the first and often even the only relevant thing that reviewers and other people mention.

Ok, then there's the fridge, of course, which is probably a very close second most commonly cited reason. (In fact, citing the fridge as a reason for hating the movie is even more irrational than the aliens.)

Let's put this into perspective and compare it to the original three movies. A supernatural chest that when opened releases some ghostly supernatural energy that kills everybody not worthy? That's completely ok. A secret cult that can, among other things, rip the heart out of somebody's chest with bare hands, while the victim remains alive through the whole process? No problems. A cup that grants eternal life? Completely normal.

But aliens? Nooooo! That goes too far! Aliens is way too much! And the fridge too!

What's happening here is a really strong case of nostalgia filtering. The people who complain about the fourth movie are mostly people who saw the first three movies as kids and loved them. The movies excited their adventurous imaginations. Now, as adults, they have warm nostalgic memories of those times, when movies were wondrous escapes from reality into fantasy worlds full of adventure. As adults they cannot get that feeling anymore, and they are much more critical of movies. Therefore when a new sequel is made to the movies they loved as kids, they will be extremely critical and cynical about it. There's no way the new movie can give them the same excitement and wonder.

If the fourth Indiana Jones movie had been made back then, alongside the first ones, with all the aliens and fridges and whatnot, the attitude towards it today would be completely different. Today's adults would remember it fondly and consider it a good movie.

And conversely, if for example the third movie were made now, people would hate it, even if it were identical. People would always find some ridiculous things to complain about.

Wednesday, August 29, 2012

When will we see an actual Batman movie?

The 60's Batman TV series was basically a farce. It came from a time when, for some reason I cannot comprehend, TV and film producers thought that a superhero series or movie should be a wacky, over-the-top comedy. (I really can't understand where the connection between "superheroes" and "comedy" comes from. In my mind there's a disconnect the size of the Pacific Ocean between them.) In fact, that mentality persisted for a surprisingly long time (even in the 90's and 2000's we were still getting superhero movies that were more comedy than anything else.)

Tim Burton's 1989 Batman and its sequel were an attempt to make an actual Batman movie (an attempt which sadly sank back into comedy in the hands of Joel Schumacher, a turn of events best left forgotten in the annals of history.) It was ok'ish... kind of. Yes, it was something resembling Batman, but... not really. Batman's suit is not like that, he is not really like that as a persona, the Joker is definitely not like that, the universe is not really like that... It's just not Batman. It's something that resembles it, but isn't really.

Christopher Nolan made his own version of Batman as a trilogy between 2005 and 2012. While this trilogy is highly praised, and contains some of the most profitable and best-received movies of all time... I'm sorry to say, but it's once again not Batman. It's something that resembles Batman, but just isn't. It's gritty, it's badass, it's kind of realistic... but it's just not Batman. It's like a parallel-universe Batman that closely resembles the actual one, but just isn't the actual one. And once again the Joker is good in his own right... but he's not the actual Joker. He's something that may resemble the original, but just isn't.

(And again, aaargh, the costume. Why do these people insist on putting Batman into a bulky full-body armor that makes him look like the Michelin Man, unable even to turn his head? How exactly is he supposed to fight anybody inside that? It's not possible. The Batman of the comics does not use rigid full-body armor because he doesn't need it. He's that badass. He dispatches enemies using stealth and psychology, not by standing in front of them waiting for them to shoot. He's a ninja, not a human tank.)

Another thing that all these movies completely forget is that Batman is a detective. (After all, he's supposed to be the greatest detective in the world.) This aspect is completely lacking in all the movies. There's just no trace of it.

The closest thing we have gotten to an actual Batman "movie" is the Batman: Arkham Asylum and Batman: Arkham City video games. Now that's what I'm talking about. Here's a gritty(ish) Batman and a cast of allies and enemies that look, feel and act like the real Batman, while still maintaining the kind of "innocence" that's part of the fantasy world of the Batman universe in the comics. Here Batman is Batman, the Joker is the Joker, Catwoman is Catwoman; basically every single character is the character from the comics, and the entire setting is that of the comics. There are no stupid changes to make it "more realistic" or "more plausible" or anything like that. This is the comics Batman, pure and simple. No compromises, no bullshit. Just pure unadulterated Batman, no more, no less.

That's what I want from a Batman movie.

Tuesday, August 28, 2012

Annoyances when searching the net for info

This is a really small thing... but I think every software developer has been there, and it can get pretty frustrating.

If you are a long-time developer, you have most probably experienced it: you encounter a problem (like a really strange error message, or a strange bug in some library that you just can't figure out, eg. because the library's documentation is lacking) and you try to search for a solution online. Surely others have had the same problem and solved it.

Very often this is so, and you usually find the answer within the first few Google hits. Sometimes, however, you will find someone asking the very question you need answered, and then answering their own post with just "never mind, I found the solution", without ever explaining what the solution was. You are left with nothing.

A lesser form of this is when someone asks the question, another person answers it, and the first person just replies "thanks, I will try that to see if it works", and never follows that up with a report of whether it actually worked. This is, of course, a much more minor form, because you can try the solution yourself. However, it's still a bit annoying, because if the original poster had reported that the solution works, you would be more confident of it before starting to test. (After all, the person who responded could simply be guessing, and be wrong.)

Why not try the solution first, and then thank the person who gave the answer?

Saturday, August 25, 2012

Programming job interviews

One thing I detest about job interviews is that you have to lie even if you really mean to be honest. You have to lie in order to convey your true skill properly. (Not that I have extensive experience of job interviews, but this is what I have gathered.)

For example, suppose that you are an experienced programmer with a good grasp of how imperative/OO languages (either compiled or scripting) work, and extensive experience with some languages, but only a very modest understanding of PHP in particular: you know the basics, you have perhaps written a hundred lines of it in total, but you know how it works and what it offers. Most importantly, if you had to, you could quickly learn to use it proficiently and competently.

However, job interviews don't generally ask you that. Instead, they ask you how much you have programmed in PHP.

You have two choices: Tell the truth, or "stretch it a bit".

If you tell them that you have only minimal experience of PHP in particular, they will probably mark you as not a very good candidate for a PHP programming job. Your assurances that you can learn the language quickly and that's not just BS will probably not help much.

The other possibility is to outright lie: You can claim that you have programmed in PHP quite a lot.

In a sense you are not "lying" per se. Rather, you are answering the question that they really want to ask, rather than the question they think they want to ask. What they really want to know is how easily you could start programming in PHP, not how much you have programmed with it in the past. (Of course having a lot of experience in PHP programming always helps, I'm not saying it doesn't. However, even more important is how much programming experience in that type of language you have overall, not how much you have in PHP in particular.)

However, in order to convey your true expertise you have to lie. The bad thing about this is that you can get caught red-handed. If they start asking some minutiae about PHP, you might not know the answers on the spot, and you will end up looking like an opportunistic liar.

They might well end up hiring someone who has programmed more in PHP (or at least claims to have), but who's not very good at it.

Thursday, August 23, 2012

TV live show editing

Watch this comedy routine by Abbott and Costello performing their famous "Who's on first" sketch. Watch it fully and then come back, as I have a couple of questions to ask about it.

Question 1: How many times did they show the audience?

Answer: Zero times.

Question 2: How much did it bother you that they didn't?

If you are a normal person, I am pretty sure that you didn't even notice this until I drew attention to that fact. It certainly did not bother you at all.

If this were being televised today as a live show, at least 50% of the footage would be showing the audience reactions. This is something that bothers me to no end in today's TV show editing.

If I'm watching some performers doing an act (be it comedy, magic, juggling or whatever), I want to see the performers. Why would I want to see the audience? What possible interest would I have in that?

Of course it's not the act of showing the audience itself that's so bad. It's the fact that I do not get to see the performer while the audience is being shown, and in fact the performance is being constantly interrupted, which is really, really annoying.

Most performers have practiced their routines over and over in order to make an enjoyable viewing experience for their audiences. Everything they do is to entertain the audience. (It would be quite catastrophic if the audience got bored.) Hence every single thing they do, from beginning to end, is for the benefit of the audience, and is highly rehearsed and trained to be as interesting and enjoyable as possible.

And then TV directors butcher this highly polished act into bits and pieces, censoring half of it, completely destroying it. In the worst cases I have seen they show the audience even more than the performance itself, even in the middle of a routine. (This is especially annoying with performances that are long and continuous, without pauses, such as juggling.)

In fact, this practice bothers me so much that whenever I see eg. a YouTube clip or whatever of a TV show where there's this kind of editing, I just stop watching it. I simply can't stand it.

Why do they do this? Don't they understand that they are destroying the performance and annoying the viewers? Yet they keep doing it over and over, and have been doing it for decades, as if it were the mark of good live TV editing.

Wednesday, August 22, 2012

Graphical user interfaces going bad

Once upon a time, when the industry had a good decade or two of actual user experience with graphical user interfaces, a set of good design rules was established. Most operating system vendors even had guidelines for developers on how to create a standardized GUI for their programs, so as to make them as easy and intuitive to use as possible.

These are mostly small things, but they are important. For example, if a program has a menu (as most graphical programs do), it should always be located in the same place in all programs (at the top, below the title bar), and there are certain menus that should always have the same name (such as "File" and "Edit") and contain the same basic set of commands (such as "Open" and "Save"). If a menu item performs an immediate action, it should be named without any punctuation (eg. "Save"), but if it instead opens a dialog where more options can be specified, its name should end with an ellipsis (eg. "Save as...") Dialogs should always have a certain set of buttons named in a certain way (and, in general, they should always have a button to cancel the action.)

And so on and so forth.

The purpose of all these rules and guidelines is to unify all programs, make them use a standardized format and layout for common tasks, and thus make it as easy as possible for users to learn to use a new program. Also, the rules are intended to make it easy and intuitive to know what something does (for example, as mentioned, if a menu element does not immediately perform an action but opens a dialog, and hence it's "safe" to select it without the danger of it doing any modifications, its name will have an ellipsis in it. This is an intuitive and standard way of knowing this, with a very small formatting guideline that might feel insignificant in a completely different context. It's all these small things that make a good GUI a good GUI.)
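As a trivial illustration of the ellipsis convention (a hypothetical helper of my own; no toolkit's actual API is implied), the rule boils down to one bit of information per menu item: does it act immediately, or does it open a dialog first?

```python
def menu_label(action: str, opens_dialog: bool) -> str:
    """Apply the classic GUI guideline: items that open a dialog
    (and are therefore "safe" to select) get a trailing ellipsis,
    while items that act immediately get no punctuation."""
    return action + "..." if opens_dialog else action

# A conventional File menu, per the guideline described above:
assert menu_label("Save", False) == "Save"          # acts immediately
assert menu_label("Save As", True) == "Save As..."  # opens a dialog first
```

The point is not the code itself but that the convention is mechanical: a user (or a UI review tool) can check it at a glance.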

There are also many guidelines and principles on how to design a good GUI on a higher level. An example of such a principle is "if you feel the need to add a text label explaining the usage of some GUI element, you are probably doing something wrong". The usage of GUI elements should be, in general, intuitive and easy to understand without textual explanations.

Other useful guidelines include things like what color schemes to use in the application. These are often aimed at making the application easy to use for beginners and people with disabilities (such as poor eyesight.) For example, putting gray text on a slightly grayer background can be a bad idea, because people with poor eyesight may have a difficult time reading it. (The ability to distinguish between low-contrast elements is often poorer in old people.)

Whatever your opinion might be of Microsoft, Windows 3.x was actually pretty decent in terms of GUI design. All standard window and dialog elements were clear, consistent, and easy to understand and use, and if a program closely followed the standard guidelines of Windows GUI design, it was significantly easier to learn and use than a program that deviated from it.

Lately, however, it seems that many companies are completely ignoring these useful guidelines, and aiming for something that they seemingly think "looks cool" rather than is usable.

A very common trend seems to be to hide things from the user by default. This trend probably started around Windows 95. While Windows 3.x always showed file names in full, at some point newer Windowses started hiding file name extensions by default. In fact, they started hiding almost everything by default: You only got a name (without extension) and a program icon.

That's right: a program icon. Not an icon representing the file type, but the icon of a program (the program that would open it.) This means that if you had two files with the same name but different extensions (for example "image.jpg" and "image.gif"), both of which were opened with the same program, they would look completely identical in the file browser. No visual distinction whatsoever. It's impossible to tell them apart without doing some additional operations to find out. This is most certainly not good GUI design.

In Windows ME one of the most braindead ideas ever to come out of any company in existence was defecated by Microsoft: Let's hide "unused" menu items in menus. And, while we are at it, let's reorder the remaining ones for good measure. This goes against every good GUI design principle in existence, it's a horrible, horrible idea, and it should never even have been thought of. Menus become basically the antithesis of what good GUI design stands for.

In Windows 7 Microsoft went even further with all this "let's hide everything from the user" ideology: Now they hide menu bars and title bars by default. This is supposed to make programs look hip and cool. However, it significantly decreases the usability of everything because now you have to do extra steps to get to a menu, and you don't get any useful information that a program may put in the title bar. (For example, web browsers typically put the title of the web page there, and text editors the title of the document you are editing.)

(I have never understood why Microsoft hates menus so much. They seem to be trying their hardest to get rid of them. What exactly is so wrong with menus? They are clear, intuitive and easy to use, they categorize actions in a hierarchical and intuitive manner, and they don't clutter the GUI because the dozens and dozens of possible actions are neatly packed into expandable menus. The more actions a program can perform, the more important it is for it to use menus for this rather than something else, especially since menus are a good solution and have been used for this purpose for decades.)

One of the key GUI design principles is that buttons, clickable icons and anything else that can be clicked should look like it, and if they are disabled, they should clearly look disabled (usually by being colored in dim grays). This is another rule that Microsoft has liked to break for a long time: Clickable icons are no longer distinguishable from mere decoration because their borders are hidden (are we seeing a pattern here?) unless you hover over them. What's worse, in some cases they are even drawn in grayscale until you hover over them, making it harder to tell whether they are disabled or not. This is not how you do good GUI design. You shouldn't have to hover over anything to see whether it's an enabled, clickable element. It should be obvious as-is.

Perhaps Apple knows better than this? Nope.

For example, open a new tab in Safari. How do you close this tab? It's not immediately apparent... It turns out that the close button of the tab is completely hidden by default, and you have to hover over the tab to make it visible. Apparently this is supposed to be hip and cool, but it's a completely counter-productive design that makes the usage of the program less obvious for no benefit.

In the first versions of Mac OS X, the current window and the other windows were clearly distinguished from each other. However, with each new version this distinction has become harder and harder to discern. At this point the active window has almost the same title bar coloring as every other window, making it very difficult to see which window is currently active. Why have they done this? No idea. It only makes using the system harder, for no benefit.

Recently Apple announced that they would hide scrollbars by default. This was a real facepalm moment for me. I was like, "WTF? Are they going to hide everything in the future? Just don't show anything. What exactly is the point?" What possible use could there be in hiding scrollbars? You can no longer see where in the page you are by glancing at the scrollbar, without performing additional actions to make it visible. This is almost as braindead an idea as what Microsoft did with menus in Windows ME.