Thursday, July 27, 2017

Postmodernism

In order to understand Postmodernism, we first have to understand what Modernism is.

Modernism is an umbrella term used to describe the radical changes in sociopolitical, philosophical and scientific thinking that started in the western world in the 18th century and eventually spread to much of the rest of the world.

The philosophical and scientific aspect of this zeitgeist started with the so-called Age of Enlightenment in the 17th and 18th centuries. Prior to this time, religion, philosophy and science were considered parts of the same whole, tightly tied to each other. The approach to science was highly presuppositionalist: heavily driven by the religious and philosophical ideas of the time, with scientific theories being constrained by those presuppositions.

The Age of Enlightenment, however, caused a massive change in the philosophy of science. It was the time when the modern scientific method was developed. Religious and philosophical presuppositionalism was discarded and replaced with evidence-based, fact-based methodology. (In other words, the direction of scientific research was effectively reversed: rather than assuming a conclusion and trying to corroborate it with research and testing, the research and testing are done first, and the conclusion follows from them, without prejudice or presupposition.) This philosophy of science gave rise to modern scientific skepticism: no claim is accepted without sufficient valid evidence, and all claims must be based on facts and on measurable, repeatable, falsifiable tests. This was in drastic contrast with prior centuries, when religious presuppositionalism was rampant and scientific rigor was almost non-existent.

This caused an inevitable and essentially total separation between science and religion, and effectively a split within philosophy as well. Where previously religion, philosophy and science were considered aspects of the same thing, now they were completely separate.

On the sociopolitical side, Modernism refers to the ever-increasing animosity within the general public toward royalty, nobility, and any form of governance that was inherited, elitist, aristocratic, or oligarchic. The culmination of this unrest was the French Revolution at the end of the 18th century, in which the absolute monarchy in France was replaced by a republican government.

This revolution was extremely significant in world history because it triggered the decline of absolute monarchies, first in the west and eventually in much of the rest of the world, replacing them with democratic governments, where the government is elected by the people, from among the people, rather than being owned by an oligarchy by birthright. While royalty and nobility still exist in many countries even today, they (especially the latter) do not hold any significant power, and are mostly nominal and ceremonial.

On a perhaps more abstract level, some of the fundamental characteristics of Modernism are objectivity, (scientific) skepticism, equality, and meritocracy. The world and human society are judged by facts and hard, testable evidence; morality and legislation are made as objective as possible; people are treated as equally as possible; and societal success is meant to be based on personal work and merit rather than birthright and class. This is the era of science, technology, universal human rights, democracy, and the modern judicial system.

Postmodernism, on the other hand, is a much fuzzier and harder to define concept. And, in my opinion, almost completely insane.

One of the driving ideas behind postmodernism is the concept of subjective truth, and to a degree, a rejection of objectivity and science. Postmodernism is often summarized as "all truth is relative".

The notion of subjective reality in postmodernism can range from the absolutely insane (and therefore innocuous, as it has no danger of affecting society) to the more down-to-earth mundane things (which, conversely, can be a lot more dangerous in this sense).

The most extreme (and therefore most innocuous) form of this is the notion, held by some people, that the universe, the actual reality we live in, is personal and subjective. We make our own reality. If we think hard enough about something happening, it has a higher chance of happening, because we shape our own reality, our own existence, with our minds. Evidence-based science is rejected as antiquated and closed-minded.

That kind of philosophy is not very scary because nobody takes it seriously. Especially not anybody with any sort of power to impose it onto others and, for example, create legislation based on it.

However, there are other forms of postmodernist notions that are much more dangerous and virulent. Perhaps no other example is better and more prominent than the postmodernist idea of human gender.

In modernism, gender is defined by what can be measured and tested. It's a cold, hard scientific fact. We can take samples and run them through machines to see what they consist of. We can observe and measure the human body, its biology and its functionality. Everything can be rigorously observed, measured and tested.

In postmodernism, however, gender is defined by the subjective feelings of the person. Not only is a person's gender not measurable by any machine; there aren't even just two genders, but however many each person wants there to be. People can freely make up new genders for themselves as they see fit. It doesn't matter what the test tubes and machines say; all that matters is what the person says and feels.

The very concept of "gender" can no longer be stated scientifically, based on measurements, facts and testable evidence. Science is completely rejected here (unless it is twisted for political purposes to support the notion).

Unlike the first extreme example of postmodernism above, this one is much scarier because it has much more influence in the actual world. It affects, on a much wider scale, how people behave and, perhaps most importantly, what kind of legislation is enacted and how eg. schoolchildren are taught and treated. Nobody in their right mind would demand that schoolchildren be taught that the universe itself is shaped by whatever we think and want. Yet schoolchildren in many places are already being taught that they can create their own genders as they wish, and to completely disregard science on this matter.

The really scary thing about it is how virulent the idea is. School after school, university after university, and government after government is embracing this form of postmodernism, and some countries are already enacting laws to enforce it. And there seems to be nothing to stop the insanity from spreading.

As mentioned, the concept of gender is just one egregious example of postmodernism. There are many others. And they are taking an ever firmer hold on our society, undermining factual, objective science.

Friday, July 21, 2017

Nintendo's biggest mistake: The PlayStation

Surprisingly few people know the history of the Sony PlayStation line of consoles. This might not be news to tech-savvy console aficionados, but it's nevertheless not widely known.

The fact is, Nintendo effectively created the Sony PlayStation.

Or, more precisely, caused it to be created.

You see, back when the SNES was at the end of its lifespan in the early 1990's, its greatest competitor, the Sega Genesis, had a CD peripheral (which could hold an entire CD's worth of a video game, including CD-quality sound and some primitive video footage).

In order to compete with it, Nintendo wanted to create a CD peripheral for the SNES as well, and partnered with Sony to build it. The tentative name for this peripheral was, and I kid you not, PlayStation.

However, the two corporations got into some kind of dispute and dissolved the partnership. But rather than just forget about it, Sony decided to create their own console: the Sony PlayStation. Which was, unsurprisingly, CD-based. (How they got to keep the name, I have no idea. Maybe some kind of deal between the corporations.)

The rest is, as they say, history. The PlayStation came to be one of the most successful consoles of the era, and its successor, the PlayStation 2, became the best-selling console in history, so far. The PS3 and PS4 are not far behind either.

I wonder if Nintendo is kicking themselves because of this.

(Although, in retrospect, this might have been a blessing for gamers. Nintendo consoles are not exactly known as the platforms for badass games for hard-core gamers. They used to be, back in the SNES era, but not in a long time.)

Sunday, July 16, 2017

What is falsifiability in science?

Many people think that science works by formulating a hypothesis (based on observation and measurement) about a particular natural phenomenon, and then trying to prove that hypothesis correct. While that might sound very reasonable at first glance, it's actually a naive and even incorrect approach, because it can lead to wrong conclusions through confirmation bias.

Rather than trying to prove the hypothesis, the better method is, as contradictory as it might sound at first, to try to disprove it. In other words, don't construct tests that simply confirm the hypothesis; instead, construct tests that, if successful, will disprove the hypothesis and show that it's wrong.

And "trying to disprove the hypothesis" is not always as straightforward as "if the test fails, it disproves the hypothesis". In many cases the hypothesis must be falsifiable even if the test succeeds.

An example of this is controlled testing. It might not be immediately apparent, but the "controls" in a "controlled test" are, in fact, there to try to disprove the hypothesis being tested, even if the actual test turns out to be positive (ie. apparently proving the hypothesis correct).

A "control" in a test is an element or scenario for which the test is not being applied, to see that there isn't something else affecting the situation. For example, if what's being tested is the efficacy of a medication, the "control group" is a group of test subject for which something inert is being given instead of the medication. (In this particular scenario this tests, among other things, that the placebo effect plays no significant role.)

If the medication were tested without a control group, a positive result (ie. the medication apparently remedies the ailment) would be unreliable. It might look like it supports the hypothesis, but it doesn't account for the possibility that an external factor, something else (eg. the placebo effect), caused the positive result instead of the medication.
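
As a purely illustrative sketch (the numbers and names below are made up for the example, not taken from any real trial), here is a tiny simulation of a medication that does nothing, applied to an ailment that often clears up on its own and also responds to the placebo effect. Looked at in isolation, the treated group seems to show an impressive recovery rate; only the comparison with the control group reveals that the drug adds nothing:

```python
import random

random.seed(42)

BASE_RECOVERY = 0.3    # assumed: the ailment clears up on its own in 30% of cases
PLACEBO_BOOST = 0.2    # assumed: believing you were treated adds another 20%
DRUG_EFFECT   = 0.0    # the "medication" itself does nothing at all

def improves(extra_effect):
    """One simulated patient; True if they get better."""
    return random.random() < BASE_RECOVERY + PLACEBO_BOOST + extra_effect

n = 10_000
medicated = sum(improves(DRUG_EFFECT) for _ in range(n))  # group given the real pill
control   = sum(improves(0.0)         for _ in range(n))  # group given an inert pill

print(f"Improved on medication: {medicated / n:.1%}")  # ~50%, looks convincing on its own
print(f"Improved on placebo:    {control / n:.1%}")    # also ~50%, so the drug adds nothing
```

Running it prints roughly 50% improvement for both groups: exactly the situation described above, where a result that looks positive on its own is explained entirely by something other than the medication.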

It's very important that a hypothesis can be proven wrong in the first place: that it is possible to construct a test which, if positive, actually disproves the hypothesis (or, at the very least, a test which, if negative, likewise disproves it).

That is the principle of falsifiability. The worst kind of hypothesis is one that can't be proven wrong, ie. when there is no test that would show it to be incorrect.

For example, if somebody believes in ghosts and spirits, ask them if there is any test or experiment that could be constructed that would prove that they don't actually exist. I doubt they could come up with anything. The same is true for psychics, mediums and the myriad other such things. They will never come up with a test or experiment whose result they would accept as definitive proof that those things are not real. (Any results of any experiments on these subjects will be dismissed with hand-waving, like the psychic not feeling well, or whatever.)

The hypothesis that ghosts exist is pretty much unfalsifiable. While people can come up with experiments that, if positive, would "prove" their existence, not many can come up with an experiment that would disprove it. And that's a big problem. The "positive" experiment results are not reliable because, like with uncontrolled medical tests, they don't account for other reasons for the observed results.

That's why it's more important to be able to prove a hypothesis wrong than right. If numerous attempts at falsification (ie. experiments that, if successful, would prove the hypothesis wrong) all fail, that gives credibility to the hypothesis. But if no such experiments are possible, then the hypothesis becomes pretty much useless.

Tuesday, July 11, 2017

Bill Nye is a liar

Bill Nye is a somewhat famous "science communicator", meaning that while not a professional scientist per se, he helps popularize science and inform the public about scientific matters. He is most famous for his 1990's TV series "Bill Nye the Science Guy".

For some reason in later years he has become quite badly "blue-pilled" (ie. an advocate of modern feminist social justice ideology). In the absolutely infamous 9th episode of his newest show, "Bill Nye Saves the World", he advocates for "gender fluidity", and how there are billions of genders and sexes and whatnot. The episode is an absolute cringefest (and I'm not just saying that; it really is. You have to see it for yourself.)

Many people have criticized it for, among other things, dishonesty. For example at one point in the episode he says:
"These are human chromosomes. They contain all the genes you need to make a person. This one is called an X chromosome, and that one down there, that's a Y chromosome. They are sex chromosomes. Females usually have two X's and males generally have an X and a Y. But it turns out about one in 400 pregnancies has a different amount of sex chromosomes. Some people only have one sex chromosome. Some people have three, four or even five sex chromosomes. For me that sounds like a lot. But using science we know that sex and every aspect of human sexuality is.. well, it's a little complicated."

Bill Nye is implying here that the difference between the sexes is somehow fuzzy, and that there may be multitudes of different sexes. What he is doing here is lying by omission.

He is making it sound like having an unusual number of sex chromosomes is somehow normal, and that there's absolutely nothing special or wrong (biologically speaking) about it, other than those combinations of chromosomes being a bit less common. While he doesn't outright say it, he seems to be implying that some of the everyday people you encounter out there, normal people just like anybody else, may have a different number of sex chromosomes, and you couldn't tell the difference (other than, I suppose, that they might be more effeminate or more masculine than expected, or otherwise of ambiguous gender).

What he quite conspicuously does not say is that having a number of sex chromosomes different from the normal is actually a congenital defect, a birth defect, often with health and/or developmental consequences.

While some people with an unusual number of sex chromosomes may well turn out to be completely normal and healthy, and never even realize there's something unusual about them, the most common consequences are infertility, stunted mental development (such as learning disabilities), stunted growth and lower life expectancy. And those are just the milder outcomes. Severe developmental deficiencies and a significantly heightened risk of all kinds of diseases (such as cardiovascular disease) are also common. The list of possible symptoms is really extensive. And the more the number of sex chromosomes deviates from the normal, the more common and severe the symptoms are, and the less likely it is for the person to even survive to adulthood.

In fact, the vast majority of pregnancies with sex chromosome disorders end in miscarriage or stillbirth.

But Bill Nye doesn't convey any of this to the viewer. Instead, he gives the impression that people with an abnormal number of sex chromosomes are just normal, healthy people you meet every day, and wouldn't even recognize outwardly as being such.

Bill Nye's hypocrisy is also heavily criticized because in his original "Bill Nye the Science Guy" TV series there was an episode dealing with genders and sex chromosomes, which stated, clearly and repeatedly, that there are only two sexes, period. This segment was completely cut out from the Netflix re-release of the series.

Monday, July 10, 2017

New Nintendo 2DS XL and Nintendo's marketing strategy

When Nintendo released their previous-gen console, the Wii U, they botched their marketing strategy almost catastrophically. The Wii U was, indeed, a completely new "next-gen" console in the Nintendo line, belonging to the same console generation (the 8th) as the PS4 and the Xbox One. It was not just a slightly fancier version of their previous console, the Wii (which competed with the PS3 and the Xbox 360).

Nintendo botched the marketing because they didn't make it clear enough to the wider public that yes, this was indeed an entirely new console, a "next-gen" console, not just a slightly upgraded Wii. This has been cited as one of the reasons for the relative commercial failure of the Wii U. People were simply confused and thought that it was just some kind of Wii with an extra touch-based controller, or something. Many casual, non-tech-savvy Wii owners saw no incentive to buy (what they perceived as) just another version of the same console.

Nintendo, perhaps having learned their lesson, did significantly better with the marketing of their next console, the Nintendo Switch. Massive advertising campaigns made it quite clear that this is, indeed, an entirely new console, the next "big one" from Nintendo, the real replacement for the old Wii.

Their marketing was so successful, in fact, that as far as I understand, the Switch broke the record for the fastest-selling console in its first week/month. If I remember correctly, the one-million-units-sold landmark was reached in just a few days, faster than any other console in history, including the PS4.

But regardless of this incredibly successful marketing campaign, it appears that Nintendo might be falling into their old habits.

The Switch was originally intended to be a merging of Nintendo's two major console lines, ie. the home consoles and the handheld consoles. The Switch was supposed to be, and is, a hybrid of the two, able to work as both, and thus ought to serve as the next-gen replacement for both.

What that should mean, in turn, is that Nintendo's focus, and that of all third-party developers, ought to be concentrated on the Switch, with the Wii/Wii U and the 3DS slowly being phased out as the obsolete "last-gen" console pair. The Switch is now the next-gen console from Nintendo, for which all new games, in increasing numbers, will be made, handheld or otherwise. This was what many early Switch buyers were (and are) expecting.

But now Nintendo seems to be giving mixed signals about this, after all.

Some months ago there was a rumor that some Nintendo executive may have given hints that this might not, after all, be the end of the handheld 3DS line, and that there might be a "next-gen" version eventually, in parallel with the Switch. Of course this was just a rumor, and I don't know how reliable it was, nor have I heard of it since. Only time will tell.

Anyway, rumors aside, Nintendo just recently released a new version of the 3DS: the New Nintendo 2DS XL (that's a mouthful). This is essentially a New 3DS XL (which is a version of the New 3DS with larger screens, which in itself is an upgraded version of the 3DS) with a slimmer design and without the stereoscopic 3D effect. (Its major advantage is, quite obviously, a somewhat cheaper price compared to the New 3DS XL.)

In other words, a bit over three months after they released the Switch, they released yet another version of the 3DS. This seems to signal that Nintendo still intends to support the system for at least a few years to come, rather than letting it end its natural lifespan as people move to the Switch.

Many critics, and Switch owners, are worried that this means Nintendo is not, after all, dedicating all of its time, resources and effort to Switch development, but will still share them with the 3DS line. It might also signal to third-party developers to do the same.

Couple this with criticism from several major game developers that the Switch is not a very good platform for big modern triple-A titles (because it's much less powerful than anticipated), and the worries only get stronger. Millions of people bought the Switch because they anticipated it being the next big thing. Will it, however, turn into just another Wii U in terms of its library of games and overall support?

Nintendo is giving very mixed signals here. I don't think they should, at this point.

Saturday, July 8, 2017

In defense of the "waterfall model" of software development

Software development processes are higher-level ideas and principles on how to develop a piece of software (or any system based primarily on computer software) for a required task. For very small projects it may be enough to just have a need and start coding a solution for it. However, for even slightly larger projects this becomes infeasible very quickly, especially when many people are involved. (When more than one person is involved, coordination problems appear immediately: making sure every participant knows what to do and when, and so on.)

The so-called "waterfall model" is one of the oldest such development models ever devised, going as far back as the 1950's. While there are many versions of this model, differing in details and number of steps, the distinguishing characteristic of the model is that it consists, essentially, of big sequential stages, which are usually followed in strict order (ie. the next stage isn't started until the previous one has been finished.)

A typical, simplified example of such stages could be "requirements" (figuring out and documenting what exactly the software needs to do), "design" (planning out how the software should be implemented), "implementation" (actually writing the software), "testing", and "maintenance". Part of testing is, of course, fixing all the bugs that are found (so it partially loops back to the "implementation" stage, sometimes even to the "requirements" stage, if it turns out that some of the original requirements are impossible, contradictory, or infeasible).

For decades the waterfall model has been generally considered antiquated, simplistic, rigid, and above all, inefficient. Countless "better" models have been devised and proposed over the decades, most of which promise more efficient development with higher-quality outcomes (ie. fewer bugs, faster development, and so on). If you ask any expert software engineer, they will invariably dismiss it as completely obsolete.

As a long-time professional small-business software developer, however, I would argue that perhaps the bad reputation of the waterfall model may be undeserved, and that it could, in fact, be the best model in many projects, especially smaller ones (somewhere in the ballpark of 20-200 thousand lines of code, a few months of development time.)

The absolutely great thing about the waterfall model is that the requirements are figured out and documented in full, or almost in full, before writing the software even starts. While perhaps not written in stone, never to be changed again during development, this document should at least lock down the vast majority of the features, preferably down to the tiniest detail.

The great thing, as a programmer, about having such a complete and detailed requirements document is that once you start implementing the program, you have a very clear, complete and unambiguous picture of what needs to be done, and you can design the program from the get-go to accommodate all those requirements and features. You can plan and implement your module and class hierarchy so that it fluently and cleanly supports all the required features right from the start. When done well, this leads to nice, clean and logical class hierarchies and interfaces, and to a more robust and understandable program design overall.
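
As a purely hypothetical sketch (the report example and all the names in it are invented for illustration, not taken from any real project), this is the kind of interface you can design up front when the requirements document already states that reports must be exportable to CSV and JSON, with more formats promised later:

```python
from abc import ABC, abstractmethod
import csv
import json

class ReportWriter(ABC):
    """One small interface; each output format named in the spec is one subclass."""
    @abstractmethod
    def write(self, rows: list, path: str) -> None: ...

class CsvWriter(ReportWriter):
    def write(self, rows, path):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)

class JsonWriter(ReportWriter):
    def write(self, rows, path):
        with open(path, "w") as f:
            json.dump(rows, f, indent=2)

def generate_report(rows: list, writer: ReportWriter, path: str) -> None:
    # The rest of the program never needs to know which format is in use,
    # so each extra format promised in the requirements costs one new class.
    writer.write(rows, path)
```

The point is not the pattern itself but the timing: because the full list of formats was known before implementation started, the seam was put in the right place on day one instead of being retrofitted later.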

Just as importantly, having an almost-complete requirements document, and thus knowing exactly what needs to be done, means that the implementation of the program will be relatively straightforward and fast. The actual implementation usually does not take all that much time when everything that has to be done is clear from the get-go.

If such an almost-complete requirements stage and document is not produced, however, the project easily descends into the process-level equivalent of "cowboy programming", and almost inevitably into actual "spaghetti code" in the program implementation.

In other words, if the project starts with just a vague idea of what should be done, and the concepts for the project evolve during its implementation, with new ideas and features being conceived as the project progresses, this leads almost inevitably to absolutely horrendous code, no matter how well you try to design it from the start.

What's worse, the implementation will take a very long time. Existing code will constantly need additions, changes and refactoring. Existing code will often need to be redesigned to accommodate new requirements (which were impossible to predict at the beginning).

This can turn nightmarish really quickly. Sometimes even a simple-sounding new feature, which might sound like it could be implemented in minutes, might take several hours to implement, just because the existing code was not prepared to support that feature.

This kind of software development is far from fun. In fact, it can be absolutely horrendous, and horrendously inefficient. New requirements and new ideas keep pouring in on a semi-regular basis. Some of them take minutes to implement, others can take several hours. Many of these ideas are there just to see whether they will work, and may be dropped later, after it's decided that the idea didn't work out after all. Essentially, the software implementation is used as a testbed for new ideas; if they don't pan out, they are simply discarded. This wastes countless hours of development time.

And of course, as a result of all this, the program becomes an absolute mess. No matter how much you try to keep it clean and high-quality, there's no avoiding it, as hundreds and hundreds of new and changed features are patched into it, sometimes haphazardly out of necessity.

When one is involved in such a project, one really starts to yearn for a waterfall-model requirements document, which would make implementing the program so much easier and faster.

Personally, I would trade these new, poorly designed, poorly enacted "modern" software development models for a good old waterfall model any day, if it meant having a clear and complete picture of what needs to be done right from the get-go, with little to no changes made during or after development. It would make development so much easier and faster, the end result would be of much higher quality, and the whole project would probably take a lot less time.

Tuesday, July 4, 2017

The most over-hyped movie in history

Public and/or marketing hype for a work of art is definitely a lot more common with video games, but movies get their share from time to time as well, especially when it's a new movie in a popular franchise (and especially if it really is new, as in the first one made for the franchise in a very long time).

What is the most hyped movie in the entirety of movie history? There are, of course, many candidates, but I would propose that Star Wars Episode I: The Phantom Menace wins that category.

The original Star Wars trilogy is, for one reason or another, one of the most influential sets of movies in recent popular culture. Very few other movies or franchises parallel its success and pervasiveness. In the 80's, and largely in the 90's, Star Wars was everywhere, and everybody knew what it was. It was almost impossible not to. And the number of fans was staggering.

However, the third movie in the trilogy, Return of the Jedi, was released in 1983. Since then, a myriad of spinoff movies, TV series, comic books and so on have been made, but nothing that continued the actual movie canon.

When it was announced that a new movie in the main canon franchise would be released in 1999, after a hiatus of a whopping 16 years, fans went absolutely crazy.

Perhaps no sign of this is clearer than the fan reaction to the movie's trailer. The trailer is, it must be said, a work of art in its own right. It's pretty awesome even today, but it was especially so in 1999, from the perspective of the starving fans.

The trailer was, in fact, so popular that, and I kid you not, many people bought tickets to other movies just to see the Phantom Menace trailer at the beginning, and then left after it was over. (1999 was still well before YouTube, and a time when the majority of people didn't even have an internet connection at all, much less one that allowed downloading huge video files, so the vast majority had no way to see the trailer anywhere other than at a movie theater.)

Fans camped outside some movie theaters literally for weeks prior to the premiere of the movie, and these camping-tent lines were astonishingly long. (While this was not unprecedented, I wouldn't be surprised if it was the largest such event in movie history, in terms of the number of people in the lines and how long they were there.) The moment the theater doors opened on the day tickets went on sale was a spectacle in itself, and got news coverage (which is itself quite rare).

Of course the movie itself turned out to be... somewhat mediocre in the end, and the reception to be lukewarm at best. As one critic put it years later, "it looks like Star Wars, it sounds like Star Wars... but it doesn't feel like Star Wars."

The reception was, of course, a bit more positive among the die-hard fans themselves, at least at first. Similar queues formed at the premieres of the two other movies in the new trilogy, but they weren't nearly as massive (still quite massive, especially at the premiere of the second movie, just not to the same degree). I think there was a kind of mentality where the die-hard fans were hoping that the two subsequent movies would be much better, and were at some level in denial about how mediocre the first one was (perhaps because they didn't want to admit, even to themselves, how hyped they had gotten for a movie that turned out to be somewhat of a disappointment).

Years later, I don't think many fans consider the prequel trilogy in general, and the first movie in particular, to be all that good. Quite a disappointment in the end.

But the pre-release hype surrounding the movie was, in my view, unprecedented, and so far unparalleled.

Why is HDMI 1.4 so common in 4k displays?

4k displays (ie. 3840x2160 resolution) are all the rage nowadays. More and more display manufacturers are making their own 4k products.

There is one thing I have noticed about many of them, however: many, even most, of these displays use HDMI 1.4 rather than HDMI 2.0, which makes little sense.

The major difference between the two versions is bandwidth. HDMI 1.4 does not have enough bandwidth to carry 4k video at 60 Hz in uncompressed RGB format; it only has enough to do so at 30 Hz. HDMI 2.0, on the other hand, has the required bandwidth for 4k@60Hz.
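
The back-of-the-envelope arithmetic below (a rough sketch: 8 bits per color channel, blanking intervals ignored, so the real requirements are somewhat higher) shows why 30 Hz squeaks through HDMI 1.4 while 60 Hz does not:

```python
# Approximate video data rates for 3840x2160 at 24 bits per pixel.
WIDTH, HEIGHT, BITS_PER_PIXEL = 3840, 2160, 24

def data_rate_gbps(refresh_hz):
    return WIDTH * HEIGHT * BITS_PER_PIXEL * refresh_hz / 1e9

HDMI_1_4_GBPS = 8.16   # usable video data rate of HDMI 1.4 (10.2 Gbit/s raw TMDS)
HDMI_2_0_GBPS = 14.4   # usable video data rate of HDMI 2.0 (18 Gbit/s raw TMDS)

print(f"4k@30Hz needs ~{data_rate_gbps(30):.1f} Gbit/s")  # ~6.0  -> fits within HDMI 1.4
print(f"4k@60Hz needs ~{data_rate_gbps(60):.1f} Gbit/s")  # ~11.9 -> exceeds HDMI 1.4, fits HDMI 2.0
```

Real signals also carry blanking intervals, which push the 60 Hz figure even closer to HDMI 2.0's limit; that's why 4k@60Hz RGB needs HDMI 2.0 (or DisplayPort), while 4k@30Hz fits within HDMI 1.4.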

It's never a question of the display itself being incapable of displaying 4k content at 60 Hz, as these monitors invariably support it through their DisplayPort connection. It's only the HDMI connection that limits the refresh rate to 30 Hz.

Some 4k displays do support HDMI 2.0, but for some reason they seem to be a minority at this moment.

This is problematic for several reasons. Firstly, it forces PC users to use DisplayPort rather than HDMI. Ok, perhaps not such a big deal.

But secondly, and more importantly, both the PS4 Pro and the Xbox One X have only an HDMI 2.0 output port; they do not support DisplayPort. This means you can't use one of these HDMI 1.4 displays with them if you want 60 Hz in RGB mode. (At least the PS4 Pro does support 4k@60Hz over HDMI 1.4, but only in YUV420 mode, which uses lossy chroma subsampling, making colors less accurate and prone to artifacts around colored edges.)

I can't really understand why monitor manufacturers are doing this. Sure, they probably have to pay more in order to use HDMI 2.0 (AFAIK it's not free to use), but I doubt it's that much more.

Moreover, many manufacturers outright hide which version of HDMI their monitor uses. Many of them only list "HDMI" as a supported input, without specifying the version number anywhere. If you want to be sure, your only recourse is to try to find some third-party review that mentions it.

Although, at this point, it's probably safe to assume that if the manufacturer doesn't say which version of HDMI the monitor uses, it's 1.4.

Saturday, July 1, 2017

Gender discrimination in Australian Public Service hiring?

The Australian Public Service is the Australian federal government's civil service, providing services that touch almost every part of Australian life.

In 2016, women comprised 59.0% of the APS as a whole, but accounted for only 42.9% of its Senior Executive Service officers. Is this clearly a case of gender bias (deliberate or unconscious) in hiring?

A government study sought to find out, by testing applications and CVs from which the gender and all other identifying characteristics of the applicant had been removed.

The results were surprising. There was indeed bias when applicants were identifiable, but in the other direction: women were more likely than men to be shortlisted (ie. accepted for the next step in the hiring process). Not by a lot, but measurably so (2.9% more likely, according to the study). Moreover, and perhaps more surprisingly, male reviewers were more likely to shortlist female applicants than female reviewers were.

Of course this meant that when the reviewers did not know the gender or any other personal characteristics of the applicants, ie. when this information was omitted from the CVs and the reviewers could not show any favoritism or bias, women actually became less likely to be shortlisted and more likely to be rejected.

This, of course, means that the feminist theory that women's minority share of managerial positions is caused by misogynist bias is, at least in this case, false. On the contrary, there is already bias in favor of women, rather than against them. Yet they still form a minority in the top positions.

I find the conclusion of the study interesting:
"Overall, the results indicate the need for caution when moving towards ’blind’ recruitment processes in the Australian Public Service, as de-identification may frustrate efforts aimed at promoting diversity"
I don't think there could be a more direct way of saying that "we need to deliberately favor women in hiring over men, if we want to promote diversity".