testing – Simulate SqlConnectionException with Simmy and Entity Framework

I’m trying to test service resilience in the case of failure, and I’ve found Simmy, which looks promising, but my guess is that it can only be used along with Polly. Since my main point of failure is likely to be the database connection, I would like to inject failures at the Entity Framework part of the code. Is that possible with Simmy? Or is there any other way to inject failures on the Entity Framework side for some chaos engineering?

testing – Why is the test runner not picking up tests for a contrib module?

I recently added tests to Cache Register, but the test runner on drupal.org doesn’t seem to be picking them up. The tests are all located in tests/src/Kernel (repo) but aren’t queuing up in the issue queue for either patches or merge requests. Am I missing something?

EDIT: The current answer references the namespace of the base test class, but I needed to do it that way to make the autoloader work for the tests locally. The tests themselves do use the correct namespace. I suspect that the way I’m implementing the base class may be the actual issue, so I’m trying to figure out a way around that. I’m having trouble finding an example of a contrib module that uses a base class for its tests, though (actually, it looks like Blazy uses this pattern for its kernel tests, so this should be viable despite my local autoloader problem).

EDIT 2: Refactored that Base class into a testing Trait instead. Remaining “real” tests already use the right namespace and run locally, but still not getting picked up.
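For reference, a rough sketch of what that trait-based setup could look like (the trait and method names here are hypothetical, not the module’s actual code):

<?php

namespace Drupal\Tests\cache_register\Kernel;

/**
 * Shared setup pulled out of the former base class (hypothetical example).
 *
 * Assumes the using class extends KernelTestBase.
 */
trait CacheRegisterKernelTestTrait {

  /**
   * Installs the configuration the cache_register kernel tests rely on.
   */
  protected function setUpCacheRegister(): void {
    $this->installConfig(['cache_register']);
  }

}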

testing – d.org test runner not picking up tests for contrib module

Not sure, but I think your namespace is still using the old variant, looking at https://git.drupalcode.org/project/cache_register/-/blob/1.0.x/tests/src/KernelTests/CacheRegisterTestBase.php:

namespace Drupal\Tests\KernelTests\cache_register;

Should be:

namespace Drupal\Tests\cache_register\Kernel;

The first link below also states:

PHPUnit-based: Generally, the other suites must be within a tests/src/$suite_type directory. These tests will have a namespace of Drupal\Tests\$extension\$suite_type. Note that there are exceptions to this naming scheme, illustrated below.
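Applied to this module, a kernel test under tests/src/Kernel would then start roughly like this (the class name is illustrative):

<?php

// File: tests/src/Kernel/CacheRegisterExampleTest.php (illustrative name).
namespace Drupal\Tests\cache_register\Kernel;

use Drupal\KernelTests\KernelTestBase;

class CacheRegisterExampleTest extends KernelTestBase {

  /**
   * Modules to enable for this test.
   */
  protected static $modules = ['cache_register'];

}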

See also the documentation examples:

design – Using bats testing or “if statements” in an installation shell script?

Context

After building an installation script called main.sh, which is a shell script, I would like to write one or more unit tests for most of (ideally each of) the lines of code in that script, partly by inspecting the output of each command and partly by testing the expected effects of executing the command. For example:

#!/bin/sh
# Install taskwarrior and capture apt's output for later inspection.
sudo apt install taskwarrior > install_tw.txt

Could be tested by determining whether the second to last line of the install_tw.txt file equals either:

taskwarrior is already the newest version (2.5.1+dfsg-9).

or

Setting up taskwarrior (2.5.1+dfsg-9) ...

and by testing that the output of the task diagnostics command starts with task 2.5.1. If either of the two tests fails, the installation can be aborted immediately, with an insightful error message.

These checks could be performed by if statements in main.sh, by unit tests inside that same main.sh, by external bats files that contain unit tests, or by something else.
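For instance, a minimal sketch of the if/case-statement variant inside main.sh might look like this (the expected strings are the ones quoted above and may need adjusting for other versions):

# Inside main.sh, right after `sudo apt install taskwarrior > install_tw.txt`:

# The second-to-last line of the apt output should confirm the install.
apt_line="$(tail -n 2 install_tw.txt | head -n 1)"
case "$apt_line" in
  "taskwarrior is already the newest version (2.5.1+dfsg-9)." | "Setting up taskwarrior (2.5.1+dfsg-9) ...")
    : # looks good, carry on
    ;;
  *)
    echo "taskwarrior installation appears to have failed: $apt_line" >&2
    exit 1
    ;;
esac

# task itself should report the expected version.
if ! task diagnostics | head -n 1 | grep -q '^task 2\.5\.1'; then
  echo "task diagnostics does not report version 2.5.1, aborting." >&2
  exit 1
fi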

Doubts

Bats files can be an excellent way of performing unit tests on shell files. However, I have not (yet) seen a unit-testing framework being used during the actual execution of the main code, not in shell, nor in Python, nor in Java. And here is an example, on line 143 of a script similar to the one I wrote, from a source that I currently consider more knowledgeable than me on “these kinds” of decisions, that uses if statements instead of unit tests during the installation script to determine whether the relevant conditions are satisfied.
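For comparison, the same checks expressed as bats tests, run in a separate file after main.sh, might look roughly like this (the file and test names are just placeholders):

#!/usr/bin/env bats
# install_tw.bats - assumes main.sh has already run and left install_tw.txt behind.

@test "apt reported taskwarrior as installed or already present" {
  apt_line="$(tail -n 2 install_tw.txt | head -n 1)"
  [ "$apt_line" = "taskwarrior is already the newest version (2.5.1+dfsg-9)." ] || \
    [ "$apt_line" = "Setting up taskwarrior (2.5.1+dfsg-9) ..." ]
}

@test "task diagnostics reports version 2.5.1" {
  run task diagnostics
  [ "$status" -eq 0 ]
  [[ "${lines[0]}" == "task 2.5.1"* ]]
}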

Furthermore, I am slightly concerned that I might obfuscate my script by calling tests after every line, which might be a bit less intuitive/transparent than an if statement that checks certain conditions.

Question

Would it be advisable to use bats unit tests during the execution of a basic, roughly 40-line installation script, to verify that the installation progresses as intended, or should I perhaps consider another approach?

Advice for alternatives to unit testing when no expected value is known

I am writing several functions that I want to test. But, I have no way of knowing the true expected value, so I guess unit testing can’t be done here.

Any alternatives/solutions, or just expect to write correct code?

testing – How can I set up an SQLite memory database to improve PHPUnit test speeds?

I’m trying to speed up a functional Drupal test.
Given a test…

...
  public function the_front_page_loads_for_anonymous_users() {
    $this->config('system.site')
      ->set('page.front', '/node')
      ->save(TRUE);

    $this->drupalGet('<front>');

    $assert = $this->assertSession();
    $assert->pageTextContains('Welcome to Drupal');
    $assert->pageTextContains('No front page content has been created yet.');
  }

This test runs really slow against my database … 2+ mins.

If I use a file-based SQLite test database, it drops to around 1.6 mins:
<env name="SIMPLETEST_DB" value="sqlite://localhost/logs/test.sqlite"/>

If I use an in-memory database, <env name="SIMPLETEST_DB" value="sqlite://localhost/:memory:"/>, Drupal detects no database and runs install.php.

How can I set up an SQLite in-memory database to improve test speeds?

testing – What’s the best way to get GitLab Docker runners and Python tox to work together?

I’m trying to get a better understanding of how Tox and GitLab CI (with Docker runners) would work together, as they seem to have a bit of overlap in what each does. I think I may also be missing something about the exact purpose of Tox.

Here’s Tox’s stated purpose:

tox is a generic virtualenv management and test command line tool you can use for:

  • checking that your package installs correctly with different Python versions and interpreters
  • running your tests in each of the environments, configuring your test tool of choice
  • acting as a frontend to Continuous Integration servers, greatly reducing boilerplate and merging CI and shell-based testing.

That last item is what I want out of it when using GitLab CI. But if I’m using docker runners, the virtual env stuff seems extra, and redundant. I assume I’m not the first person to notice this, and there is a recommended way to configure tox to be less redundant. I wasn’t able to find information on that so far, though.

What’s confusing me is that both GitLab CI and Tox setup and configure test environments, and then execute several different runners. I would like to use GitLab CI for most or all of that part, as it would enable better UI integration, and allow using multiple job runners. I could even use different python docker images with different versions, instead of virtualenvs. That makes me wonder if Tox gives any benefit at all…
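As a sketch of that idea (not a recommendation, and the image tags and packaging layout are assumptions on my part), a GitLab-CI-only setup might drop tox entirely and let each job supply its own interpreter via a Docker image:

# .gitlab-ci.yml (hypothetical sketch)
stages:
  - test

.pytest_job: &pytest_job
  stage: test
  before_script:
    - pip install ".[test]"   # assumes test dependencies are declared as a "test" extra
  script:
    - pytest

test-python3.9:
  <<: *pytest_job
  image: python:3.9

test-python3.10:
  <<: *pytest_job
  image: python:3.10

An alternative, perhaps the usual compromise, would be to keep tox as the single entry point for local runs and have each CI job call the matching environment (for example tox -e py39 inside the python:3.9 image), so the two tools stay in sync.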

What I’ve found so far, though, are examples that just run Tox in a single job for each job type (unit, lint, etc.). (I may have missed some more complex examples, and that’s what I’m looking for.)

So I want to better understand how to get these two tools to work together, and whether they even should: does GitLab CI pretty much already do what tox tries to do, or is there a way to use the unique strengths of both without too much redundancy? It would be great if you know of examples of projects that use both tools well, and how/why that works. Any case studies you might know of would also be great, to illustrate what issues might come up. Finally, does this vary for application vs. library development? (I’m currently focused on library development, but I do both.)

abstraction – Do Kotlin libraries with inline APIs encourage high coupling and discourage unit testing?

As an example, let’s assume our application needs some way to communicate with other systems using HTTP.

interface HttpClient {
  fun <T> get(url: String, returnType: Class<T>): T
  fun post(url: String, body: Any)
}

Now it seems like good practice to have our components depend on this interface and not on an implementation of the HttpClient – this makes it very easy for us to swap clients, either at authoring time or, if you’re being fancy, even at runtime.

We decided to change our HTTP client and really liked the way the Ktor client looked. The examples look great; using inline functions with reified type parameters definitely makes the code much more readable. The public API that Ktor (and, in fact, most other Kotlin HTTP client implementations) provides looks something like this:

public suspend inline fun <reified T> get(url: String): T

and calling these methods is nice

httpClient.get<MyResponse>("https://google.com")

compared to “the old fashioned java way”

httpClient.get("https://google.com", MyResponse::class.java)

However, this kind of API seems to make it impossible for me to simply swap HTTP clients, since inline functions cannot be used in interfaces: the implementation used is chosen at runtime, while inlining happens at compile time. In the examples of libraries like Ktor you see how easy it is to create these clients and use them, but in real projects you don’t want every component to create and configure its own HTTP client itself; you would want to delegate this to some abstraction. Therefore, I argue, offering an inline-only API forces you to couple classes that need to communicate over HTTP to the implementation the library offers. This can be fine, but I think there should be a non-inline alternative offered as well. I could add another layer of abstraction over the HTTP client, something like MyApi, but then I’ve pretty much lost all the convenience.
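One pattern that seems to offer a middle ground (this is only a sketch, not Ktor’s actual API, and all the names below are made up) is to keep a non-inline, overridable method in the interface and layer the reified convenience on top of it as an inline extension function:

interface SuspendingHttpClient {
    suspend fun <T> get(url: String, returnType: Class<T>): T
}

// The reified overload lives outside the interface, so it can stay inline
// while the member above remains an ordinary virtual suspend function.
suspend inline fun <reified T> SuspendingHttpClient.get(url: String): T =
    get(url, T::class.java)

// Call sites keep the convenient syntax:
// val response: MyResponse = httpClient.get("https://example.com")

Because the member is a normal suspend function, implementations backed by Ktor or any other client can be swapped behind the interface, and in tests it can still be stubbed (with MockK that would be coEvery rather than every, since the function suspends).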

Wouldn’t it make sense for Ktor (and many other libraries) to also offer an API like this to support a more decoupled design?

public suspend fun <T> get(url: String, returnType: Class<T>): T

Secondly, I think these inline APIs make testing much harder. All of a sudden you don’t have a separate component that you can mock; you have code that will be inlined into the code that calls it. For example, this won’t work anymore (example using the MockK library):

every { httpClient.get(....) } returns SomeResponse()

Because of course, there is nothing left to mock – the get method doesn’t exist anymore in the bytecode.

I haven’t been able to solve this problem. You could argue that the httpClient shouldn’t be mocked, but in most cases I don’t want to go through the hassle of including the http client in my test – I have that tested separately and just want to test what my component does when the http client returns this response.

Ultimately my question is: do these kinds of APIs in libraries hurt clean software design, or are there good ways to solve the problems I have described?

AI in Testing: Do You Need It?

Artificial Intelligence has not only impacted industries; it has also empowered the SDLC, enabling faster development and deployment. The combination of test automation and AI has numerous advantages. Learn about a few of those advantages and how to enhance your software testing process with AI capabilities.

testing – How should I have to name the table headers, in order to make Behat create content with a field address properly filled?

I’m writing a Behat test where I’m trying to create content that uses an address field generated by the Address module. I’m using Drupal 9.0.7, Address module 1.9.0, and Behat 3.8.1. My goal is to create this type of content in my scenario; I’m using a Behat table for this, and a Given statement like the following:

   Given place content:
     | title     | field_address:country_code | field_address:address_line1 | status |
     | <<title>> | es                         | Behat test street           | 1      |

field_address:country_code and field_address:address_line1 are sub-fields of the address field.

This way of naming the sub-fields doesn’t seem to work.

How should I have to name the table headers, in order to make Behat create content with a field address properly filled?