Which Technology is Best?

Over the last couple of decades, software development has been a constant debate of, "What technology is best?" By technology I mean the software tools, languages, and medium with which you provide your software to your target audience.
To answer that overloaded question, we first need to understand the goals and current resources.

Customer questions:
Who is the customer?
What platforms do they have? (Windows, Mac, Android, iOS, Xbox?)
What software do they already have? (Chrome, IE, Firefox, Flash, Unity, Silverlight?)

Designer questions:
What tools do they already know? (Photoshop, After Effects, Maya?)
What are the design requirements? (Animation, Video, Particle effects?)

Technical artist questions (someone who takes a design and implements it in software):
Do they exist? (This role is often filled by either the designer or developer)
What workflows and tools are they used to? (CSS, Flash, Expression Blend, JavaFX Scene Builder?)

Developer questions:
How many users will our next milestone have?
How much load will the servers need to handle?
What languages do they know?
What frameworks do they know?
How many developers are there?

Project manager questions:
What is the minimum viable product?
What is the right balance between achieving MVP and planning ahead?

You can start to see the complexity of what sounded like a simple question. There are dozens of popular programming languages and technologies out there, and without understanding your task, there will never be a clear winner.
Now that you understand the goals, the resources available, and the customer, the first step is to list the possibilities. To do that, we first need to gain a shallow understanding of what technologies are out there, what they're typically used for, and what their pulse is.
Use good judgement: don't start with the list of 300 or so languages in existence; find the current technologies designed for your target platform(s).

The first filter I use is what's commonly referred to as "checking the pulse" of a technology. An easy (lazy) way to do this is to just check Google Trends. Compare one language to another and see how they stack up. Of course this isn't perfect, and you can get misleading results when the name of the language is ambiguous, like "Go", but it can sometimes be a very fast way to exclude something from your research.

The graph above shows a Google Trends example. It does not represent exactly which language is preferred or more popular, but it can be the fastest way to tell whether or not something is relevant compared to something else. The way I would interpret this graph is simply, "Java is a common search term; COBOL, in comparison, is not." Do not try to read too deeply into the meaning of that.
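If you want to automate this kind of pulse check, here is a minimal sketch. It assumes the unofficial pytrends Python package (not an official Google API), and the search terms and timeframe are purely illustrative:

```python
# Quick "pulse check" sketch using the unofficial pytrends package
# (pip install pytrends). Terms and timeframe are illustrative only.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["Java", "COBOL"], timeframe="today 5-y")

df = pytrends.interest_over_time()       # one column of 0-100 scores per term
print(df[["Java", "COBOL"]].mean())      # rough relative interest over the period
```

Treat the output the same way you would treat the graph: as a coarse relevance signal, not a popularity ranking.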

Gauging the pulse of a technology can be difficult; there are a lot of factors involved, such as: "How long has it been around? How mature is it? Is it on the rise or in decline? How active is the community?"
Ask experts in the field, post on forums: what's the 'vibe'?

Your list should still be fairly long, but it shouldn't include anything dead at this point. The next filter is to make sure the technology will actually work toward your goals. For example, I'm reasonably certain you can't make a website in Objective-C, so don't try. If you have multiple targets (e.g. web, mobile, and desktop), it may be beneficial to find technologies that let you write once and deploy everywhere, but understand that going that route is not always ideal. It may add dependencies across teams, limit flexibility and specialization for a given target, and lack optimization. There are many examples out there of both successes and failures with this approach. Knowing your initial release target is important here: if you intend to release on multiple platforms at launch, you should be iterating on multiple platforms during development.

What is your current team's training? It takes time to master new technologies and workflows, and even shuffling your teams around can add overhead and reduce velocity. Find the right balance between keeping skills fresh and up-to-date and keeping what's worked for you in the past.

Is it scalable? Do you need to scale? There are two meanings to this: scalability in terms of the number of users you can support, and scalability in terms of how much velocity you lose as your project gets more and more complex.

In terms of how many users your software can support, this is mostly a server-side concern. That is, the technologies chosen for your server can be the difference between scaling to 100 users and 100 million users, while the client affects scalability only in how it interacts with the server. Picking technologies that scale better isn't always the right answer if the scale is unnecessary. For example, say you're making a website for a local semi-famous photographer. If you tried to make the website support millions of users per day by sharding your databases across a dozen clusters, you'd likely put somebody out of business once you find out the website only gets a couple thousand visitors per month.
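For a sense of what "sharding across a dozen clusters" actually commits you to, here is a hypothetical sketch; the cluster names and the shard-picking scheme are made up for illustration:

```python
# Hypothetical sketch of what sharding implies: every read and write must
# first be routed to the right database. All names here are imaginary.
import hashlib

SHARDS = ["db-cluster-01", "db-cluster-02", "db-cluster-03"]  # imaginary hosts

def shard_for(user_id: str) -> str:
    """Pick a shard by hashing the user id so load spreads evenly."""
    digest = hashlib.sha1(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query now needs this extra hop, and re-sharding later is painful.
# That is pure overhead for a site serving a few thousand visitors a month.
print(shard_for("photographer-fan-42"))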
Typically, technologies that allow you to scale add complexity. For example, PostgreSQL scales better than MySQL [citation needed], but it is by nature more difficult to use, and if it's misused, it can end up performing worse than its simpler counterpart.

How complex can your software grow? Scalability in this sense means: as your feature set grows and the code-base becomes larger, how much time and effort does it take to add or change things? This depends on a lot of things, such as how clean and stable the architecture is, how modular the code is (modularity is grouping code in ways that a change to one group has no way to affect another, unrelated group), how easy it is to gain confidence that a change is sound, and how many users it affects.
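As a toy illustration of that kind of modularity (the names here are hypothetical), code that depends only on a narrow interface cannot be broken by changes made behind that interface:

```python
# Toy modularity sketch: the report code depends only on this small interface,
# so swapping or changing the storage backend can't break it as long as the
# interface is honored. Names are hypothetical.
from typing import Protocol

class UserStore(Protocol):
    def get_email(self, user_id: int) -> str: ...

class SqlUserStore:
    def get_email(self, user_id: int) -> str:
        return "user@example.com"   # pretend this came from a database

def send_report(store: UserStore, user_id: int) -> str:
    # This function has no idea, and no way to care, how the store works.
    return f"Report sent to {store.get_email(user_id)}"

print(send_report(SqlUserStore(), 42))
```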
Most of this answer has more to do with the experience of the developers involved, but some of it has to do with the technology. One technology consideration I would speak to is: how fast can you iterate on a change? The faster you can iterate, the more creative control you have, and the more times you are likely to try something before pushing the change in. If a technology requires you to spend 10 minutes between each iteration (through an elaborate build process, or even just a long walk through the application to reach the section you changed), then you will spend much less time trying out your change than if the technology had a 3-second iteration.
I strongly believe that iteration is critical for any piece of software that has intangible creative qualities to it. The difference is between proof by example and proof by induction. Whether something "feels" good can only be proved by example; there is no unit test or mathematical proof you can apply to your code that will tell you the user is going to like it. However, if you are writing a service that has a known output for any given input, then iteration is less important: you can essentially prove that it works before you ever try it. This is why we can do things like put a man on the moon without killing ten thousand monkeys first (only a dozen or so).
The other key piece to consider: when do you know there is a problem? There are various stages in software that let the programmer know there's a mistake.
1) As they type. For example, can their tools point out syntax mistakes, misuse of APIs, and potential errors?
2) When they run automatic verification. Various tools check for different potential problems: compilers and linters assert that syntax is correct, unit tests check whether assumptions the developer wrote down still hold, and integration tests check whether parts of the system are still compatible.
3) When they run the software themselves. The faster the iteration time, the more this step happens.
4) Peer review - Mostly useful for verifying that standards are followed and for knowledge sharing, but it can also catch potential mistakes.
5) Continuous integration - Build systems can be set up to run a battery of tests on the code.
6) QA - Quality assurance testers bang on it.
7) Public beta - A select sub-set of users bang on it.
8) Live - You really don't want your issues caught here. At this point, if there's a problem, you hope the loop back through steps 1-8 can be executed in a timely manner.
Language choice affects #1-3 the most. The faster the iteration, the more coverage #3 gets; the more statically typed the language, the more coverage #1 and #2 get.
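As a rough illustration of stages 1-3 (a hedged Python sketch, not tied to any particular toolchain): a static type checker can flag the commented-out call before the program ever runs, the unit test catches regressions during automatic verification, and anything else waits until someone actually runs the software:

```python
# Illustration of stages 1-3 with plain Python and type hints.
def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

# Stage 1/2: a static checker (e.g. mypy) rejects this call before it runs:
# total_price("three", 2.5)   # error: argument 1 has incompatible type "str"

# Stage 2: a unit test encodes an assumption and fails the moment it breaks.
def test_total_price() -> None:
    assert total_price(3, 2.5) == 7.5

# Stage 3: everything the first two stages can't express is only discovered
# by running the software, which is why fast iteration matters.
test_total_price()
```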
Another critical piece of software scalability is how easy the software is to understand. If there are pieces of the puzzle that are very difficult to understand, chances are they will at some point become very buggy. Pay particular attention to "mystery meat" smells: places where things "just magically work" but it's difficult to understand why.

The question may be complex, but the answer is simple: Pencil and Paper.