More 3D Printing and Fine Tuning

Hello all!

Lately, I’ve been working with my 3D printer and I want to talk about some of the things I’ve been doing to get better prints from it.  In my previous post, I forgot to mention which 3D printer I actually have and whether I’ve made any modifications to it.  I currently have the Monoprice Maker Select V2.  My only mod is a custom filament holder, which has little to no effect on print quality.

My venture into fine-tuning my printer began when I purchased some new filament, specifically Hatchbox Blue PLA.  This filament was a great choice because it is very high quality despite costing only around $20 USD.  Before this I had been printing with Monoprice Transparent PLA, but that filament had several issues: layers would adhere poorly to each other, and prints wouldn’t attach to the bed properly.  I’m not sure why, but the new filament has completely fixed this; my layers are now flawless except for some wobble when the printer moves fast.  I also haven’t had to use blue tape or glue on my bed at all since switching.

After getting the new filament I felt a surge of adventure to experiment more with my slicer settings and try to make my prints even better.  For those who may not know, a slicer is a program that takes a file containing a 3D model, slices it into layers of a certain thickness, and outputs the result as a G-code file.  This G-code file is then loaded onto the printer and controls what all of the axes and motors on the printer do.
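To give a feel for what that output looks like, here are a few illustrative lines of Marlin-style G-code of the sort a slicer emits.  These particular lines are simplified and made up for illustration; a real sliced file contains thousands of them:

```gcode
G28                  ; home all axes
M104 S200            ; set extruder temperature to 200 °C
G1 Z0.2 F300         ; move the nozzle to the first layer height
G1 X50 Y50 E5 F2100  ; move in X/Y while extruding filament
```

Each line is one command: the slicer turns the 3D model's geometry into a long list of moves and machine settings like these.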

Onto what I changed and experimented with.  The slicer I use is Cura, made by Ultimaker; it’s free and in my experience works very well.  This is by no means meant to be a post about how to tune your printer or how to use a slicer, this is just my experience that I find interesting and hope you do too.  I began my experimentation by changing my printing speeds.  While attempting complex prints I would get lots of artifacts and ghosting.  I realized that if the printer is doing a complex print with many small parts, it’s going to shake a lot, because I don’t have it braced and its frame is made of sheet metal.  To fix this, I turned the print speed down from 60 mm/s to 35 mm/s, a drastic decrease, but it worked very well.

The next major change I made to my slicer settings was finding the best flow rate for my extruder.  The flow rate is the amount of filament the printer pushes out while printing a layer.  Through some testing I found that my printer tends to under-extrude filament, meaning it needs to push out more.  A good setting for my flow rate turned out to be around 110%–115%, though this depends on the print.

The final two major changes I made were to my temperature and my wall count.  I turned my printing temperature down from 210 °C to 200 °C after I noticed that the extruder was re-melting the filament in the layers below it and ruining prints.  So the Hatchbox filament is definitely more susceptible to heat than the Monoprice filament.  The final change I made was my wall count.  The wall count is quite literally the number of walls (often referred to as perimeters) the printer makes.  With my 0.4 mm nozzle I was originally using a wall count of 2, for a thickness of 0.8 mm, but this turned out to be extremely fragile in some cases, so I bumped it up to 3 walls, for a thickness of 1.2 mm.  This made my prints far more durable than before and even made complex prints turn out better.

Overall these changes really upped my print quality, and I’m very happy that I can now print complex models.  The testing took a lot of trial and error but really paid off in the end.  Learning about all of the different G-code specifics was also a great experience.  And lastly I’ll leave you with the final fruits of my efforts:

A lattice cube torture test I printed.
A benchy test I printed.

Thanks for reading and have a wonderful day!
~ Corbin

An Introduction to Machine Learning Topics

Hello all!

So after my post last week, I received some feedback saying that I should better explain the concepts I was talking about and why and how we use them.  So in this post I’m going to attempt to explain most of the concepts from my last post.

To start off I’m just gonna break things down and list out the terms I’ll be defining.  In order to do machine learning you should usually have at least two sets of data: a training set and a testing set.  Machine learning is also usually broken down into two main forms, supervised and unsupervised learning.  These then break down into the three common types of machine learning problems: underneath supervised learning we have classification and regression problems, and underneath unsupervised learning we have clustering problems.  There’s a handy infographic I found to represent this.

As scikit-learn puts it, “machine learning is about learning some properties of a data set and then testing those properties against another data set.”  In this way, we can define our two data sets.  Our training set is the data we train the computer on to recognize data properties, and our testing set is what we are trying to predict or classify based on the properties we found.

Now we can move on to the two main types of machine learning, supervised and unsupervised learning.  Supervised learning is a problem in which we feed the program some data as our training set, and that data has additional characteristics (labels or target values) that we withhold from it.  We then feed it the withheld data as our testing set, and task it with predicting those characteristics.

Underneath supervised learning, we have classification and regression.  Classification is when we feed the program a set of already labelled data and use that as our training set.  We then feed the program some unlabelled data and have it predict what that data is based on our labelled training set.  In my previous post this is what I was doing with handwriting recognition.  Regression is feeding the program a set of data that has one or more continuous variables, and having it predict the relationship between the variables and the results observed.  This task is a bit weird to envision, but I find I can understand it better if I think of an example.  The one that makes the most sense to me is inputting a set of data with three salmon variables: length, age, and weight.  A regression problem using this data would be having the computer predict the length of a salmon based on its age and weight.
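The salmon example might look something like this in scikit-learn.  All of the numbers here are invented purely for illustration:

```python
# A toy regression: predict a salmon's length from its age and weight.
from sklearn.linear_model import LinearRegression

# Each row is one fish: [age in years, weight in kg]  (made-up data)
X = [[1, 1.0], [2, 2.2], [3, 3.1], [4, 4.5], [5, 5.2]]
# Corresponding lengths in cm  (made-up data)
y = [30, 45, 58, 72, 80]

model = LinearRegression()
model.fit(X, y)  # learn the relationship between age/weight and length

# Predict the length of a 3.5-year-old salmon weighing 3.8 kg
print(model.predict([[3.5, 3.8]]))
```

The continuous output (a length in cm, rather than a category) is what makes this regression instead of classification.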

Unsupervised learning is defined as a problem in which our training set consists of a set of input values with no corresponding target values.  This means our program will be finding common factors in the data and reacting based on their absence or presence.  A common approach to this is clustering, in which you feed the computer a set of data and it separates that data into groups that share similar characteristics.
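A minimal clustering sketch, using scikit-learn’s KMeans on some made-up 2-D points:

```python
# Group unlabelled points into clusters by similarity with KMeans.
from sklearn.cluster import KMeans

# Two obvious groups of 2-D points, with no labels attached (made up)
X = [[1.0, 1.0], [1.5, 1.0], [1.0, 1.5],
     [8.0, 8.0], [8.5, 8.0], [8.0, 8.5]]

kmeans = KMeans(n_clusters=2, random_state=0, n_init=10)
kmeans.fit(X)  # no target values are ever provided

# The first three points end up sharing one label, the last three the other
print(kmeans.labels_)
```

Notice that we never told the program what the groups were; it discovered them from the structure of the data alone, which is exactly what makes this unsupervised.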

I hope this clarifies some of the things from my last post on classification that might be a bit unclear, and feel free to leave a comment if you would like any clarification or I made an error somewhere.

Thanks for reading and have a wonderful day!
~ Corbin


Diving into Machine Learning

Hello all!

So lately I’ve been messing with machine learning because I’ve always been interested in it and it’s just very cool and interesting to me.  I’d like to talk a bit about what I’ve been doing and struggling with, and show some examples.  I will be working with scikit-learn for Python, which comes with three datasets: Iris and Digits are for classification, and Boston House Prices is for regression.  Simply put, classification is identifying something, like a handwritten number, as the correct number it is, and regression is essentially finding a line of best fit for a dataset.  I still have a lot to learn about sklearn and machine learning in general, but I find it really interesting nonetheless and thought you guys would too.

So my code begins by importing a bunch of libraries.  The only ones I use in this example are sklearn and matplotlib; the others are either dependencies or libraries I plan to use in the future.

import sklearn
from sklearn import datasets
from sklearn import svm
import numpy as np
import pandas as pd
import quandl
import matplotlib.pyplot as plt

In this import, sklearn is the main library I’m using to fit my data and predict things; sklearn.datasets comes with the three base datasets, Iris, Digits, and Boston Housing Prices.  I don’t know much about sklearn.svm, but I do know that it provides the support vector machine, which essentially separates our inputted data and runs the actual machine learning, so when we input testing data it can determine what number we have written.  NumPy is a science/math library that adds support for large multidimensional arrays and matrices.  Pandas is a library for data analysis.  Quandl is a financial library that lets me pull a lot of data that I can use for linear regression in the future.  And matplotlib, through its submodule pyplot, lets me display the handwriting data.
So far my code for the recognition looks like this:

digits = datasets.load_digits()  # load the sample handwriting data
clf = svm.SVC(gamma=0.001, C=100)  # create the estimator[:-1],[:-1])  # train on every digit except the last
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap='gray_r', interpolation='nearest')  # display the held-out digit

Although my understanding is rudimentary, I can explain a little bit of what this does.  clf is our estimator, which is the actual machine that is learning, and it is what we pass our training data, which lets us pass data into the SVM that we made clf from, trains our machine to know what the numbers should look like.  I am passing all digits except for the last one through this function, because we will be testing with the last one.  We then pass a digit through clf using clf.predict(), which feeds the data for a known handwritten digit, an 8, through clf.  Our object clf then outputs array([8]), which means that it has predicted our inputted number is an 8.  If we display digits.images[-1] we can see the digit and determine whether the prediction was correct.  We do this using our three lines from matplotlib that create the figure, draw it, and then show it.  The figure we get is this:

It’s very low resolution, but it’s an 8!  I think that this is brilliant, and I definitely need to learn more about what is happening here in my code.  Machine learning is very cool and I definitely need to mess with it more.  So far I’m learning some of the basic elements, like how to fit and predict things, how training and testing sets work, and a lot of the vocabulary that is used when talking about machine learning.  I can now actually talk about things like supervised and unsupervised learning, or classification and regression methods.  Along with this, I’m also learning more about other libraries like matplotlib, and how to write more pythonic (readable) code.  For anyone who wants to try this themselves, there’s a lot of really cool stuff online, but I’m using some of the resources from hangtwenty’s GitHub repo dive-into-machine-learning.  It can be found here:  Hopefully by my next post I will have developed a basic understanding of linear regression and can create some cool examples using it, and in my next post I will attempt to explain how fitting, predicting, and training actually work.

Thanks for reading and have a wonderful day!
~ Corbin