Blog: I'm No Statistician, but... let's explore Design of Experiments training
Our 2-day Effective DoE Implementation workshop is a popular choice for clients - but what exactly does this course entail?
Written by Chris Challis, Business Director.
Make no mistake - I’m no statistician. I’m no scientist either. My engineering skills are non-existent and my formulation knowledge has yet to form. And yet, I work as a Business Director for a statistical solutions company. A career path gone awry? Gosh, no.
It’s my job at Prism Training & Consultancy to set up and facilitate statistical consultancy, training and software enquiries, handling any questions about the services we offer and identifying your needs so the right Prism team member is on hand to assist you. Consequently, I've been immersed in a whole new world of terminology: from split plots to design space, multivariate analysis to t-tests. The purpose of this ongoing blog is to share what I've been learning, as a 100% newbie, so that others who are new to the heady world of statistical methodology and its various tools can get to grips with the very foundation of what we do here at Prism.
One thing I hear regularly from clients interested in our training workshops is “My team know a little about Design of Experiments, but not much! What course would you recommend?” Whilst we offer a range of advanced courses and more in-depth statistical training, our most popular workshop by far is our Effective DoE Implementation course.
Given I’m writing a blog titled ‘I’m No Statistician, but…’, it’s safe to say I fall very squarely into the beginner camp. Sure, I understand the concept of Design of Experiments (or DoE) but with no statistical or scientific background to speak of, how do these concepts and ideas actually work in real-life situations? I signed up to find out.
The aims of the Effective DoE Implementation course are simple: to give attendees the motivation to design and analyse experiments strategically, allowing them to collect information-rich data and, in turn, sequentially build up their process knowledge. As a result, attendees end the workshop able to make decisions confidently from their experiments and transfer their knowledge to others, while also improving the productivity and efficiency of their processes and delivering fit-for-purpose work packages.
This is all very well, but how does the workshop achieve this?
What’s obvious from the get-go is that the 2-day course is an incredibly interactive one, with plenty of time spent hands-on with a specific software package (the workshop I attended used Design-Expert, although we do also deliver this training using SAS JMP). Throughout the course, we’re asked to build various experimental designs and tackle process challenges; however, rather than analysing pre-existing datasets, it makes for a much more interesting discussion if we’re generating and analysing our own data. As such, within minutes of the workshop starting we’re simulating experiments via ProSim - one of many tools we've developed in-house specifically for use during our training workshops. This particular tool allows attendees to simulate experimental results, based on various real-world scenarios built into it.
From there, we’re soon exploring experimental strategies and delving into the world of full factorials and fractional factorial designs. It’s a genuinely uplifting experience when – having faced an unnerving number of factors (and their interactions), levels and response variables – you’re able to test effects, detect curvature, understand aliasing and dabble with screening designs confidently in the space of one day.
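For anyone wondering what aliasing in a fractional factorial actually looks like, here's a minimal sketch in plain Python. The -1/+1 factor coding is standard, but the specific three-factor setup is my own illustration, not taken from the course materials:

```python
from itertools import product

# Full factorial: every combination of three two-level factors,
# coded -1 (low) and +1 (high).
full = list(product([-1, 1], repeat=3))
print(len(full))  # 8 runs

# Half-fraction: choose A and B freely, then set C = A*B (the "generator").
# This halves the run count, but at a price.
half = [(a, b, a * b) for a, b in product([-1, 1], repeat=2)]
print(len(half))  # 4 runs

# The price is aliasing: in every run of the half-fraction, the column
# for factor C is identical to the column for the A*B interaction, so
# their effects can't be told apart from this data alone.
assert all(c == a * b for a, b, c in half)
```

This is exactly the trade-off screening designs exploit: fewer runs in exchange for deliberately confounding effects you believe are negligible.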
With the basics under control, the second day builds on this and explores Design of Experiments as a sequential process. How do we augment a design for de-aliasing and optimisation? How do we scope and screen designs to investigate key process parameters? How do we calculate a predicted design space? Once again, each topic is paired with a practical for attendees to work through, using the software package to highlight the various ways problems can be solved, data analysed and key information revealed.
One moment I found particularly useful was when the workshop attendees partnered up to tackle a particular analysis, using designs of different Resolutions to explore the risks, costs and benefits associated with different numbers of runs. With discussion encouraged, feeding back our results, spotting our errors and questioning our decisions with other participants gave an interesting insight into best practice in resource management.
The results? Well, I’m no statistician, but… at least I do now know the difference between OFAT and Factorial Design, the importance of DoE as part of a Quality by Design framework and the value of scoping designs! Statistical terms such as ‘degrees of freedom’, p-values and the signal-to-noise ratio now make sense. Most importantly, I feel equipped to create my own designs and analyse their results… just don’t get me started on 3FIs!
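To make that OFAT-versus-factorial distinction concrete, here's a small worked illustration. The yield function and all its numbers are entirely made up for the example; the point is only that one-factor-at-a-time experimentation can miss an interaction that a factorial design catches:

```python
from itertools import product

# A hypothetical process: yield depends on temperature and time,
# with an interaction between them (factors coded -1/+1).
def yield_pct(temp, time):
    return 50 + 5 * temp + 5 * time - 12 * temp * time

# OFAT: start at a baseline and change one factor at a time.
baseline = yield_pct(-1, -1)   # 28
temp_only = yield_pct(1, -1)   # 62 -> raising temperature looks good
time_only = yield_pct(-1, 1)   # 62 -> raising time looks good
# The tempting OFAT conclusion is "run both high"... but that
# combination was never tested, and the interaction sinks it:
both_high = yield_pct(1, 1)    # 48 -- worse than either change alone

# A 2x2 factorial tests all four combinations, so the interaction
# shows up directly and the genuinely best settings are found.
runs = {(t, h): yield_pct(t, h) for t, h in product([-1, 1], repeat=2)}
print(max(runs.values()))  # 62
```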
Be the first to know about new blogs, upcoming courses, events, news and offers by joining our mailing list here.