Considerations When Selecting GxP Critical Software

In this article, I will go through some of the considerations a pharmaceutical, or otherwise regulated, company should work through before selecting a software vendor.

Background

First, a little about my background. I have a Master’s degree in Software Engineering and have been developing validated GxP software for the last 11 years. During that time, I have gained extensive experience as a consultant for companies buying or validating software. I have been involved in everything from validating spreadsheets to being project manager for the validation of a cloud-based QMS system. I consider myself to have a solid background in this subject – and, most importantly, experience from both sides of the table.

GxP Requirements

In my opinion, bad requirements lead to bad software. Good requirements lead to a much greater chance of a successful software project. The requirements are therefore very important, so let’s talk a little about them.

The quest for software usually, and hopefully, starts with a need within the organization. Based on this need, ideas start to form, and soon after, new needs arise. At some point, a set of requirements is written for the software.

I have seen companies where one person sits down in front of a PC and writes all the requirements alone. This will often produce a list of requirements with a lot of gaps, misunderstood functionality and a one-sided focus – and that often turns into a disaster of a system.

I have also seen companies where every employee, external consultant, partner or anyone else even remotely associated with the process is asked for requirements and opinions. Everything is then combined into a huge list and provided to the vendor. It is needless to say that such software most likely doesn’t exist and the development costs would exceed the value of the system by a huge margin.

The optimal way of writing requirements is to compile a combined list of requirements from all parties directly affected by the system. Then assemble a small group of max. 5 people to sort through the requirements and rewrite them as needed. Each requirement should then be evaluated carefully with regard to necessity. Do this by ranking them, where the highest rank is a deal-breaker and the lowest rank is just an annoyance if not included in the software. When developing software, a rule of thumb is that 20% of the requirements cost 80% of the money. So the more realistic and flexible the requirements are, the bigger the chances of getting a system with a decent return on investment.

Another important aspect regarding requirements: specify what you want, not how it should be implemented. Of course there are exceptions to this, but consider the following two requirement descriptions:

  1. The background in the testing environment must be red on all pages
  2. The testing environment must be clearly distinguishable from the production environment to avoid mistakenly using the incorrect environment.

The two requirements essentially solve the same problem, but the first one dictates how it should be implemented.

Integration

A requirement I often encounter is “Should integrate with SAP”. This requirement is extremely vague. It could mean polling an SAP server to verify it is still running, or it could mean a dynamic interface where the customer can define their own integration logic.

It is very important to be as precise as possible, especially with regards to integration, as it can affect both the price and the product match to a very high degree.

When writing a requirement regarding integration, specify what should be achieved by the integration, which product should initiate the communication, and whether data is going into or out of the system. If a description of the system’s interface exists, include it in the requirement.

It is also important to realize that even integration with well-known systems can become a time-consuming task. Many systems, especially frameworks like SAP, can be configured and used in many different ways – so chances are high that the integration needs to be custom developed even though it is a standard system.

It should also be noted that data retrieved from a non-validated system will not become validated just because it enters a validated system.

Custom product vs. off-the-shelf product

When all the requirements are defined, it is time to find a match. This will likely involve talking with a lot of vendors presenting their products – you might even send them your requirements so they can respond to each of them.

If you encounter a situation where none of the vendors meet your requirements – carefully reconsider your requirements based on the knowledge gained during the process. You might be in a unique situation where custom software is needed – but quite often it just means your requirements are too rigid or that you want to do too much in one system.

Also consider carefully whether a vendor system is Category 4 or Category 5 according to GAMP 5. This will affect the validation costs – both for the initial validation and for future updates and changes. Some well-known QMS systems will tell you they are Category 4 – but the fact is that the templates for e.g. Change Control are custom coded, not configured, which requires a completely different level of validation. So make sure to make your own assessment regarding this.

In-house development or external development

If you end up with a custom development project, or a project based on a framework that requires customization, it is likely that someone within the organization, probably someone from IT, will tell you that it is much easier, cheaper and more flexible to develop in-house. This is very rarely true, primarily for the following three reasons:

  1. Development projects require a huge amount of man-hours – are those kinds of resources available in-house, or would you need to hire a new team? If you need to hire a team, are they already experienced with GxP, and are they even available in your location? Hiring 10 or 20 developers with a special skill-set can be difficult.
  2. They will probably suggest implementing it in SharePoint, SAP or another framework – is that system validated? You cannot have a validated system running within a non-validated environment – and are you interested in validating the corporate SAP installation along with all its future updates?
  3. Does your company have prior experience with this kind of development? If so, how did it go?

I am not saying it can’t be done internally – but more often than not, it fails or becomes incredibly expensive.

If you are looking at an external development project – make sure you have dedicated project management resources on your end to follow up on the development, handle contracts, etc. An external project can also go awry – but if your contracts are in order, then you are not the one paying for it.

Audit before selecting supplier – also internal audits

Whether developing internally or externally – make sure you do a thorough audit before settling on a vendor. At the very minimum, ensure that:

  • Their validation documentation actually lives up to your demands, so you can base your validation on theirs. This will save you a lot of time and money.
  • They have proper SOPs for development, release management, update management and change management.
  • They have proper, documented training of employees.
  • You have some kind of fail-safe if the company goes bankrupt or is acquired by a competitor.

I hope you enjoyed this article. It ended up being a lot longer than anticipated, but I guess that I just had a lot on my chest :)

Using Connect By in Oracle

When dealing with hierarchical data, Oracle has a CONNECT BY clause that is very useful. I will demonstrate it through examples in the following sections.

Hierarchical Data

In this example, our data will be structured as a tree. Our example data will be a folder structure.

 - Root Folder
    - Development
        - 3rd Party Software
        - Licenses
        - Release Docs
    - Sales
        - Clients
            - ChainsawsRus
            - Knitting Fever
        - Presentations

In a database, this folder structure could be represented by

TABLENAME: Folders
|----------|----------------|--------------------|
| FolderID | ParentFolderID | FolderName         |
|----------|----------------|--------------------|
| 1        | 0              | Root Folder        |
| 2        | 1              | Development        |
| 3        | 2              | 3rd Party Software |
| 4        | 2              | Licenses           |
| 5        | 2              | Release Docs       |
| 6        | 1              | Sales              |
| 7        | 6              | Clients            |
| 8        | 7              | ChainsawsRus       |
| 9        | 7              | Knitting Fever     |
| 10       | 6              | Presentations      |
|----------|----------------|--------------------|

This data example will serve as the primary example through the rest of the article.

Connect By

Let us just jump right into it and make a query on that data.

SELECT Level, F.* FROM Folders F 
START WITH ParentFolderID = 0 
CONNECT BY PRIOR FOLDERID = PARENTFOLDERID 
ORDER SIBLINGS BY FOLDERNAME DESC;

This returns the following result:

|-------|----------|----------------|--------------------|
| LEVEL | FOLDERID | PARENTFOLDERID | FOLDERNAME         |
|-------|----------|----------------|--------------------|
| 1     | 1        | 0              | Root Folder        |
| 2     | 6        | 1              | Sales              |
| 3     | 10       | 6              | Presentations      |
| 3     | 7        | 6              | Clients            |
| 4     | 9        | 7              | Knitting Fever     |
| 4     | 8        | 7              | ChainsawsRus       |
| 2     | 2        | 1              | Development        |
| 3     | 5        | 2              | Release Docs       |
| 3     | 4        | 2              | Licenses           |
| 3     | 3        | 2              | 3rd Party Software |
|-------|----------|----------------|--------------------|

Let us go through the SQL.

Level
The first line references the “Level” pseudo column. Whenever a hierarchical query is performed, this pseudo column is added to the result. The Level column contains a number describing the depth of the row in the hierarchy – root rows are level 1, their children level 2, and so on.
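The Level pseudo column is also handy for visual output. A small sketch against the same Folders table (my own example, not part of the original queries) that indents each folder name according to its depth:

```sql
-- Indent each folder name by two spaces per hierarchy level
SELECT LPAD(' ', 2 * (Level - 1)) || FolderName AS Tree
FROM Folders
START WITH ParentFolderID = 0
CONNECT BY PRIOR FolderID = ParentFolderID
ORDER SIBLINGS BY FolderName;
```

This produces a single column that reads like the indented folder listing at the top of the article.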

START WITH
The START WITH clause tells Oracle how to find the root rows of the hierarchy. If we had written:

SELECT Level, F.* FROM Folders F 
START WITH ParentFolderID = 7 
CONNECT BY PRIOR FOLDERID = PARENTFOLDERID 
ORDER SIBLINGS BY FOLDERNAME DESC;

Then it would only return Knitting Fever and ChainsawsRus.

CONNECT BY and PRIOR
The CONNECT BY clause defines how the hierarchy is built. The PRIOR keyword, simply put, tells Oracle which column should be compared against the parent row in the hierarchy. If we instead had executed the following SQL:

SELECT Level, F.* FROM Folders F 
START WITH FolderID = 7 
CONNECT BY FOLDERID = PRIOR PARENTFOLDERID 
ORDER SIBLINGS BY FOLDERNAME DESC;

It would instead consider the root to be the folder with FolderID 7 (Clients), and because the PRIOR keyword was moved in front of PARENTFOLDERID instead, the query is reversed, walking up the tree and giving us the following result:

|-------|----------|----------------|-------------|
| LEVEL | FOLDERID | PARENTFOLDERID | FOLDERNAME  |
|-------|----------|----------------|-------------|
| 1     | 7        | 6              | Clients     |
| 2     | 6        | 1              | Sales       |
| 3     | 1        | 0              | Root Folder |
|-------|----------|----------------|-------------|

ORDER SIBLINGS BY
This tells Oracle how to order siblings on the same level. For the purpose of the example, I made it descending, as can be seen from the result.
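One more function worth knowing in this context is SYS_CONNECT_BY_PATH, which returns the full path from the root down to each row. A small sketch on the same data (my own addition to the examples above):

```sql
-- Show the full path of each folder, e.g. /Root Folder/Sales/Clients
SELECT SYS_CONNECT_BY_PATH(FolderName, '/') AS Path
FROM Folders
START WITH ParentFolderID = 0
CONNECT BY PRIOR FolderID = ParentFolderID;
```

The second argument is the separator placed between the levels in the path.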

I hope this article has helped someone understand Oracle hierarchical queries. For more information, consult the Oracle Database SQL Reference.

Using Ant to start and stop tomcat service

This short guide will describe how to start and stop Tomcat using Ant.

This was tested using Ant 1.9.6 and Tomcat 8.0.30 on a Windows 10 machine.

Requirements

Ant should be installed and its /bin folder added to the Windows PATH. Tomcat should be installed and running as a service in Windows.

Ant build.xml

First we need to define a macro in Ant for calling a Windows service:

<property name="servicename" value="Tomcat8" />

<macrodef name="service">
    <attribute name="service" />
    <attribute name="action" />
    <sequential>
        <exec executable="cmd.exe">
            <arg line='/c net @{action} "@{service}"' />
        </exec>
    </sequential>
</macrodef>


First, the property servicename is defined, setting the value to the name of the Tomcat service.
The “action” attribute can be either start or stop, and the “service” attribute is the name of the service. The macro simply calls “net start” or “net stop” from a command prompt.

The next thing we need to do is to write a target for starting Tomcat and one for stopping Tomcat – and let’s add a restart for good measure.

<target name="startTomcat">           
    <service action="start" service="${servicename}" />		
    <sleep seconds="10"/>
</target>
	
<target name="stopTomcat">
    <service action="stop" service="${servicename}" />
    <sleep seconds="10" />
    <exec dir="." executable="cmd.exe">
    	<arg line='/c taskkill /f /fi "services eq ${servicename}"' />
    </exec>
    <sleep seconds="10" />
</target>

<target name="restartTomcat" depends="stopTomcat, startTomcat" />

The startTomcat target is straightforward. I added a sleep of 10 seconds just to ensure that Tomcat has the time it needs to start. Depending on your usage, this might be removed or increased to suit your needs – it is also possible to write minutes="2" for it to wait two minutes.

The stopTomcat target is a bit more complex. First I stop the service, then wait 10 seconds to give it enough time to shut down – then I kill the process in case it has not shut down yet.
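With the build.xml in place, the targets can be invoked from a command prompt in the folder containing the build file, for example:

```shell
ant startTomcat
ant stopTomcat
ant restartTomcat
```

The restartTomcat target simply chains stopTomcat and startTomcat via its depends attribute.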

Hope this is useful to someone, took me a bit of fiddling to get working :)

Shortcodes in your wordpress theme

A shortcode is a code reference that is inserted into templates, pages or posts. When displaying content, WordPress replaces each shortcode with the output of actual code.

A simple example would be a shortcode for inserting an AdSense ad.

Defining Shortcodes

In your functions.php file, add the following code:

function get_inline_ad($atts) {
    return '<span class="widget-title">Sponsored Link:</span>
	<br>
	<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
	<!-- Unit name from adsense -->
	<ins class="adsbygoogle" 
	    style="display:inline-block;width:468px;height:60px" 
	    data-ad-client="ca-pub-1234567890123456" 
	    data-ad-slot="1234567890">
	</ins>
	<script>
	    (adsbygoogle = window.adsbygoogle || []).push({});
	</script>';
}
add_shortcode('sc_adsense_inline', 'get_inline_ad');

A function named get_inline_ad is created that returns the Google AdSense code. You can go to http://google.com/adsense, create and set up an account and then create a new ad unit – from that you can get the AdSense code to insert.

The last line registers the shortcode ‘sc_adsense_inline’ and tells WordPress to replace it with whatever the function ‘get_inline_ad’ returns.
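Shortcodes can also take attributes, using the WordPress shortcode_atts() function. A minimal sketch – the shortcode name sc_adsense_slot and the slot/client IDs here are made up for the example:

```php
// Hypothetical variant: lets the post author pick the ad slot,
// e.g. [sc_adsense_slot slot="9876543210"]
function get_inline_ad_slot($atts) {
    $a = shortcode_atts(array(
        'slot' => '1234567890', // default ad slot if none is given
    ), $atts);
    return '<ins class="adsbygoogle"
        style="display:inline-block;width:468px;height:60px"
        data-ad-client="ca-pub-1234567890123456"
        data-ad-slot="' . esc_attr($a['slot']) . '"></ins>
        <script>(adsbygoogle = window.adsbygoogle || []).push({});</script>';
}
add_shortcode('sc_adsense_slot', 'get_inline_ad_slot');
```

shortcode_atts() merges the attributes written in the post with the defaults, so missing attributes fall back to the default values.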

Using Shortcodes

Edit an existing post or add a new one, then, in either the visual or the text view, write [sc_adsense_inline].

After saving, view the post. It should now display an ad in the place where you wrote the shortcode.


Git commands via ssh in a Windows Scheduled task

This guide describes how to set up a scheduled task in Windows that can run Git commands and clone from a remote server using SSH instead of HTTP. The examples in this article use bitbucket.org as the Git repository.

Install Git

First, download and install Git. A Windows version of Git can be downloaded from https://git-scm.com/. During installation, choose the following options: use the Windows prompt and use the Windows default console.

Git commands via ssh

Run the Git GUI program. Click Help -> Show SSH Key. If none is shown, generate a key and do not enter a passphrase. When the key has been generated, copy the public key to the clipboard.

Then go to bitbucket.org, click on your avatar and choose Settings. Open the menu item named SSH Keys and click Add Key. Give the key a label and paste in the public key from the previous step – then save.

Then go to the repo you want to manage and copy the SSH URL. It can be found in the top right corner; you might need to select SSH from the dropdown instead of HTTPS. On your Windows machine, open a command prompt and run the following command:

c:\> git ls-remote git@bitbucket.org:myaccount/myrepo.git

If it says git is an unknown command, add the Git cmd folder to the PATH environment variable and run the command again. You need to close the prompt first and open a new one for the updated environment variables to take effect. Answer yes to storing the key, then run the command again and verify that this time you are not prompted.

Create batch file

Create a file git.bat with the following content

git clone git@bitbucket.org:myaccount/myrepo.git > git.log 2>&1

This command will clone the repo and write the output to git.log. The last part of the command redirects standard error into standard output, which is then written to git.log.

Run the batch file and verify that it works as intended.
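Note that git clone fails if the target folder already exists, so a clone-only batch file is mainly suited for a first run. A variant that clones on the first run and pulls afterwards could look like this (assuming the repo folder is named myrepo):

```batch
IF EXIST myrepo (
    cd myrepo
    git pull > ..\git.log 2>&1
    cd ..
) ELSE (
    git clone git@bitbucket.org:myaccount/myrepo.git > git.log 2>&1
)
```

This keeps the scheduled task idempotent, so it can run repeatedly without failing.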

Setup scheduled task

Create a scheduled task in Windows that runs the batch file. Remember to specify the “Start In” setting to control which folder the batch file is executed in, and specify that it should run even when you are logged out.

Now the most important step:
Copy the folder c:\users\<youraccount>\.ssh to c:\users\default\.ssh

This is important because, when a scheduled task runs while you are NOT logged in, it cannot find the SSH keys unless they are in the default profile folder.
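The copy can be done from a command prompt, for example (with <youraccount> replaced by the actual profile folder name):

```batch
xcopy "c:\users\<youraccount>\.ssh" "c:\users\default\.ssh" /E /I
```

The /E switch copies subdirectories as well, and /I tells xcopy to treat the destination as a directory and create it if needed.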


CrossContext and RequestDispatcher in webapps

This article will describe how to use a cross-context RequestDispatcher to communicate between two servlets in different webapps.

Setting up the example

Let’s say you have two webapps loaded in Tomcat:

  • CustomerPortal (loaded at http://myserver.com/CustomerPortal)
  • EmployeePortal (loaded at http://myserver.com/EmployeePortal)

In the CustomerPortal you want to be able to show an Employee Datasheet by calling:

http://myserver.com/EmployeePortal/employee?id=<employeeid>

and in the EmployeePortal you want to show a Customer Datasheet by calling:

http://myserver.com/CustomerPortal/customer?id=<customerid>


Allowing the communication with CrossContext

For each project, add a file named context.xml to the META-INF folder containing the following:

<Context crossContext="true" />

If your webapp already has a context.xml file, just add the crossContext parameter to the context-tag.

Now you have allowed the webapps to communicate. Next step is to do the actual communication.

Perform communication with RequestDispatcher

For doing the communication, we need a RequestDispatcher.

// Note: paths passed to ServletContext.getRequestDispatcher must start with "/"
ServletContext sc = this.getServletContext().getContext("/CustomerPortal");
RequestDispatcher disp = sc.getRequestDispatcher("/customer?id=" + customerId);

try {
    disp.include(request, response);
}
catch(Exception e) {
    e.printStackTrace();
}

The above code will call CustomerPortal/customer?id=xxx and include the output received at the current spot of the page. The example is easily changed to call the EmployeePortal.

Handle the response

In case you want to be a bit more flexible and handle the response yourself, the following code can be used instead:

ServletContext sc = this.getServletContext().getContext("/CustomerPortal");
RequestDispatcher disp = sc.getRequestDispatcher("/customer?id=" + customerId);
StringWriter sw = new StringWriter();
final PrintWriter pw = new PrintWriter(sw);
// Wrap the response so the included output is captured in the StringWriter
HttpServletResponse responseWrapper =
    new HttpServletResponseWrapper(response) {
        @Override
        public PrintWriter getWriter() throws IOException {
            return pw;
        }
    };
// include() throws ServletException/IOException - handle or declare them
disp.include(request, responseWrapper);
System.out.println("html from customerportal: " + sw.toString());

Priority Queue with a Twist

Recently I needed a variation of a Priority Queue in C# that supported concurrency.

My requirements for a Priority Queue

  • I would receive a lot of jobs in a random order
  • A job would contain a JobNumber indicating in which order they should be executed
  • The first JobNumber would be zero
  • All JobNumbers would be present (e.g. number 7 would not be missing)
  • Several worker threads would simultaneously be retrieving jobs from the queue, hence the need for concurrency

The Solution

For the clarity of the example, I have simplified the Job class to look like:

public class Job
{
	public int jobNumber;
}

My Concurrent Priority Queue would look like:

class ConPQueue
{
	private const int TIMEOUT = 10000; // 10 seconds
	private SortedList<int, Job> list;
	private int nextJobNumber;
	private static EventWaitHandle waitHandle = new AutoResetEvent(false);
	private Boolean completeAdding = false;

	public ConPQueue()
	{
		this.list = new SortedList<int, Job>();
		this.nextJobNumber = 0;
	}

	public Boolean add(Job job)
	{
		Boolean status = false;
		if (Monitor.TryEnter(this.list, TIMEOUT))
		{
			try
			{
this.list.Add(job.jobNumber, job);
				status = true;
				// Notify the take() method that new values are present
				waitHandle.Set();
			}
			finally
			{
				Monitor.Exit(this.list);
			}
		}
		return status;
	}

	public Job take()
	{
		Job job = null;
		while (job == null)
		{
			// If next job is not present, wait until notified
			// that new jobs have been added to the list
			while (!hasKey(this.nextJobNumber))
			{
				waitHandle.WaitOne();
			}
			if (Monitor.TryEnter(this.list, TIMEOUT))
			{
				try
				{
					this.list.TryGetValue(this.nextJobNumber, out job);
					if (job != null)
					{
// Only remove the item on successful retrieval
						this.list.Remove(this.nextJobNumber++);
					}
				}
				finally
				{
					Monitor.Exit(this.list);
				}
			}
		}
		return job;
	}

	public Boolean hasKey(int Key)
	{
		Boolean exists = false;
		if (Monitor.TryEnter(this.list, TIMEOUT))
		{
			try
			{
				exists = this.list.ContainsKey(Key);
			}
			finally
			{
				Monitor.Exit(this.list);
			}
		}
		return exists;
	}

	public void addingComplete()
	{
		completeAdding = true;
	}

	public Boolean isCompleted
	{
		get { return completeAdding && list.Count == 0; }
	}
}

To add a Job to the Queue:

queue.add(myJob);

To retrieve a Job from the Queue:

queue.take();
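To round things off, here is a minimal usage sketch of my own (the thread count and job numbers are arbitrary). Note that take() blocks until the next jobNumber arrives, so a real application would also need a shutdown mechanism for the workers:

```csharp
using System;
using System.Threading;

// Hypothetical usage: two worker threads consuming jobs in order
var queue = new ConPQueue();

for (int i = 0; i < 2; i++)
{
    new Thread(() =>
    {
        while (!queue.isCompleted)
        {
            // Blocks until the job with the next jobNumber is available
            Job job = queue.take();
            Console.WriteLine("Processing job " + job.jobNumber);
        }
    }).Start();
}

// Jobs arrive in random order but are taken in jobNumber order
queue.add(new Job { jobNumber = 1 });
queue.add(new Job { jobNumber = 0 });
queue.addingComplete();
```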

Hope this can inspire fellow developers :-)

Mimicking a Materialized View in Oracle

I had a quite complex View in an Oracle database that performed quite badly and I needed to improve performance. I tried making a Materialized View, but didn’t like the restrictions, so I decided to mimic the behavior using a table.

Materialized Views

I decided to look at Materialized Views, but soon realized that there were a lot of restrictions for the SQL that can be used to generate a Materialized View. A Materialized View is great for replication of data and simple queries, but for complex queries there were too many restrictions – so I decided to mimic the functionality of a Materialized View…and it turned out to be quite simple. I know it is not the best solution since it is vulnerable to coding errors, but besides that, I think it is quite elegant :-)

I had six tables (Table_A, Table_B….Table_F) and one view to create a summary – let us call it “View_ABCDEF”.

Improve Performance

To improve performance, I did the following:

I created a new table named “Mimick_ABCDEF” with the same structure as View_ABCDEF and then changed my code to search in Mimick_ABCDEF instead of View_ABCDEF.

I then updated all locations that perform UPDATE, INSERT or DELETE on Table_A through Table_F to run the following two queries after the original query:

DELETE FROM Mimick_ABCDEF WHERE (col1, col2, .... coln) IN 
(SELECT * FROM Mimick_ABCDEF MINUS SELECT * FROM View_ABCDEF);

This query removes all rows present in the table but not in the view (which happens on update and delete). This query does not work when some of the values are null, but that should be quite simple to fix.

INSERT INTO Mimick_ABCDEF (SELECT * FROM View_ABCDEF MINUS SELECT * FROM Mimick_ABCDEF);

This query inserts all rows that are in the view, but not in the table, into the table.

Quite simple :-)
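To avoid repeating the two statements at every update location, they could also be wrapped in a stored procedure – the procedure name refresh_mimick is my own, and the column list is a placeholder just like in the query above:

```sql
-- Hypothetical helper procedure to refresh the mimicked view
CREATE OR REPLACE PROCEDURE refresh_mimick AS
BEGIN
  DELETE FROM Mimick_ABCDEF WHERE (col1, col2, .... coln) IN
    (SELECT * FROM Mimick_ABCDEF MINUS SELECT * FROM View_ABCDEF);
  INSERT INTO Mimick_ABCDEF
    (SELECT * FROM View_ABCDEF MINUS SELECT * FROM Mimick_ABCDEF);
END;
/
```

Each update location then only needs a single call to refresh_mimick, which also reduces the risk of the two statements drifting apart over time.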


Enigma Project released

Recently I have been working on a larger update for the Enigma program and the Enigma Builder – finally it is ready for release :-)

Enigma is a small Java program for running a real-world puzzle game. The users will be presented with some leads, and based on those leads, they must find a code.

Click here for the English page or click here for the Danish page.