Ruby is very popular as an automation tool.
Ruby’s creator, Yukihiro Matsumoto, had what seemed like great goals: programmer productivity and fun. To quote Wikipedia, quoting Matsumoto:
“But in fact we need to focus on humans, on how humans care about doing programming or operating the application of the machines. We are the masters. They are the slaves.”
It’s hard to argue with that…
I learned Ruby when I was hired to support an ongoing “test automation” project at DocuSign. Ruby did not make me happy; it frustrated me with an opaque installation process, poor design, redundant yet inconsistent syntax, object orientation that struck me as only skin-deep, poor documentation, limited capabilities, and the fact that it is a scripting language, interpreted at runtime.
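As a small illustration of the redundant syntax (a general property of the language, not anything specific to that project), Ruby offers several interchangeable spellings for the same conditional and the same block. The method names below are my own hypothetical examples:

```ruby
# Three equivalent spellings of the same conditional in Ruby.
def classify_if(n)
  if n > 3 then "big" else "small" end
end

def classify_modifier(n)
  return "big" if n > 3   # modifier-if form
  "small"
end

def classify_unless(n)
  return "small" unless n > 3   # `unless` is yet another spelling
  "big"
end

# Two equivalent block syntaxes for the same iteration.
doubled_do    = [1, 2, 3].map do |n| n * 2 end
doubled_brace = [1, 2, 3].map { |n| n * 2 }
```

All five forms are idiomatic somewhere in the Ruby ecosystem, which is exactly the redundancy I mean: a team has to agree on conventions that the language itself does not enforce.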
One of the selling points of a scripting language is that, if there is a failure or other issue, one can just touch up a text file on the target tier and run it again. The problem is: where is the version control? Is the source-code history even consistent across tiers, or discoverable? If there is a failure and a hurried engineer buries it, it can never be diagnosed or prevented in the future.
The biggest problem I have with Ruby is that, as a runtime-interpreted language, it defers errors to runtime. Issues that a good IDE would tell you about immediately, the interpreter doesn’t tell you about until later – in one case, for a multi-tier check I developed, 15 minutes later. This adds up to a lot of wasted time, and therefore quality risk.
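A minimal sketch of the kind of deferred error I mean, using hypothetical names: the interpreter happily loads a file containing a misspelled method call, and the mistake only surfaces when that line finally executes, which could be many minutes into a run.

```ruby
# A typo in a method name is not reported when the file is loaded;
# it raises NoMethodError only when the call actually executes.
class Checkout
  def submit_order
    "submitted"
  end
end

def run_late_step(checkout)
  checkout.submit_ordr   # typo: should be submit_order
end

checkout = Checkout.new
# The script loads and runs fine up to this point...
begin
  run_late_step(checkout)
rescue NoMethodError => e
  puts "Failed only at runtime: #{e.message}"
end
```

A statically checked language flags this before the program ever runs; with Ruby, nothing short of executing the faulty path (or adding external lint tooling) catches it.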
I found that Ruby was quite capable of driving the SUT, but poorly equipped to gather data for later analysis, for example, information on failures in the automation script or in the SUT. It seems well suited to “test automation” only if one makes the common mistake of confusing “test automation” with industrial automation. For the latter, the value is in the product output, i.e., what is produced by the robot or stamping machine or whatever; for the former, the value is in what is learned about the behavior and quality of the SUT, and nobody otherwise cares about the product output.
Ironically, on this team I noticed that the way Ruby was used for “test automation” was ridiculously non-scalable: an engineer would run the Ruby script and stare at a computer screen to monitor how it made the SUT behave in the GUI. Well, yes, Ruby is effective for that kind of application … (sobs)
The team’s use of Ruby was not trustworthy. There was no way it could be.
Scripting languages are useful for light-weight applications with frequent changes.
By contrast, I use the C# language on .NET. It is similar to Java, but more modern: more purely object-oriented, with more consistent syntax, more complete runtime type information, and platform independence as well.
One does not need a heavyweight IDE to work in C#, but I use a free version of Visual Studio.
The compiler tells you about many problems immediately, especially in an IDE, saving a lot of time and frustration. The compiler is your friend.
Ruby was intended to be fun, but C# is vastly more powerful in many ways, and supports a much more trustworthy system.