Minecraft Manufactio Docker Image

Hi all, last week we had a discussion on our Discord server about playing a Minecraft modpack as a group again, and as pretty much everyone knows: it gets difficult to find one. Over the last few years nearly everyone has hosted a modpack for the whole Discord, so there are finished playthroughs of many Minecraft modpacks already. After a rather long search we settled on Manufactio by Golrith (link on CurseForge). From the looks of it, it is a pretty solid modpack, but it comes without any easy support for setting up a server.

So, welcome to my new blog post. Let's create a Docker image specifically for this modpack.

Creating the Dockerfile

Everything in Docker starts with a Dockerfile. So let's get started.

Since this modpack is only available for the rather "old" Minecraft version 1.12, it is easiest to start from a JDK 8 base image, preferably an Oracle JDK one. Luckily binarybabel provides older JDK 8 images via Docker Hub, so we don't need to build our own in this case.

FROM binarybabel/oracle-jdk:8-debian
LABEL maintainer="deB4SH (https://github.com/deB4SH)"
ENV ACCEPT_ORACLE_BCLA=true
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
COPY docker-entrypoint.sh /tmp/docker-entrypoint.sh

Besides ACCEPT_ORACLE_BCLA, which is required for using the Oracle JDK, static environment variables for LC_ALL and LANG are set up. For a plain Minecraft server these two are optional, but we want to check the server status while it's active and running, and for this we are going to rely on mcstatus, which provides a nice CLI interface. Finally, a docker-entrypoint.sh script is copied in to define what should happen once the image is started as a container.
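The docker-entrypoint.sh itself is not shown in this post; a minimal sketch of what it could look like follows. The jar name matches the Forge installer version used below, while the memory flags and the EULA handling are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of docker-entrypoint.sh - memory flags and EULA handling are assumptions.
cd /var/manufactio
# accept the EULA if the linked eula.txt does not do so already
grep -q "eula=true" eula.txt 2>/dev/null || echo "eula=true" > eula.txt
# once the server is up, its status can be checked from another shell with:
#   mcstatus localhost:25565 ping
exec java -Xms2G -Xmx4G -jar forge-1.12.2-14.23.5.2855-universal.jar nogui
```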

The next step is to install some required packages. As already mentioned, we want to use mcstatus to monitor the Minecraft instance. Besides that, unzip is required to unpack the modpack.

COPY apt/source.list /etc/apt/sources.list
RUN apt update && apt install -y unzip python3 python3-pip
RUN python3 -m pip install mcstatus

After these steps, the big part is still open: setting up Forge and Manufactio inside the image.

RUN mkdir /var/manufactio
RUN mkdir /var/manufactioconfig
COPY forgeserver/forge-1.12.2-14.23.5.2855-installer.jar /var/manufactio
WORKDIR /var/manufactio
RUN java -jar forge-1.12.2-14.23.5.2855-installer.jar --installServer
#COPY Mods
COPY manufactio.zip /var/manufactio.zip
WORKDIR /var
RUN unzip manufactio.zip
RUN rm -rf manufactio.zip
WORKDIR /var/manufactio
#Link specific files to different folder for easier docker-setups
RUN mv /var/manufactio/banned-ips.json /var/manufactioconfig && \
mv /var/manufactio/ops.json /var/manufactioconfig && \
mv /var/manufactio/eula.txt /var/manufactioconfig && \
mv /var/manufactio/server.properties /var/manufactioconfig && \
mv /var/manufactio/options.txt /var/manufactioconfig
RUN ln -s /var/manufactioconfig/banned-ips.json /var/manufactio/banned-ips.json && \
ln -s /var/manufactioconfig/ops.json /var/manufactio/ops.json && \
ln -s /var/manufactioconfig/eula.txt /var/manufactio/eula.txt && \
ln -s /var/manufactioconfig/server.properties /var/manufactio/server.properties && \
ln -s /var/manufactioconfig/options.txt /var/manufactio/options.txt

The steps are mostly self-explanatory. First we set up two directories inside the image. The folder /var/manufactio holds everything, from the Forge server installation up to the Manufactio mods that are used. After that we copy the installer and, of course, install the server itself inside the image. If you are reproducing this image based on this guide, an additional step could be to remove the installer afterwards; it is not needed once the installation is done. Next we copy all mods into the image. For that, a .zip is available inside this Git repository which provides everything required. The zip is based on the downloadable client from CurseForge with some parts stripped away, since we do not need every client mod on our server to get the modpack running.

The last step in this block is moving the configuration files into a separate folder and symlinking them back. This eases usage inside a Kubernetes environment, where configurations may be provided as a ConfigMap and mounted into the container. These steps are fully optional. If your use case is just hosting via docker-compose, you could also easily mount files directly with ${PWD}/config/banned-ips.json:/var/manufactio/banned-ips.json.
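For the docker-compose route, a minimal sketch could look like the following; the image name/tag and the exposed port are assumptions:

```yaml
version: "3"
services:
  manufactio:
    image: ghcr.io/deb4sh/docker-manufactio:latest  # hypothetical tag
    ports:
      - "25565:25565"
    volumes:
      - ${PWD}/config/banned-ips.json:/var/manufactio/banned-ips.json
      - ${PWD}/config/server.properties:/var/manufactio/server.properties
      - ${PWD}/config/eula.txt:/var/manufactio/eula.txt
```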

To round things up we need to add an entrypoint to the Docker image, in our case the docker-entrypoint.sh that was copied into the image.

#RUN SERVER
ENTRYPOINT ["/bin/bash", "/tmp/docker-entrypoint.sh"]

After that, the image is pretty much done. We could build this image with Docker, tag it, and use it to host our own Manufactio server, but in this guide we are going a bit deeper down the rabbit hole and start building up a CI/CD infrastructure.
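If you only want the manual route, building and running could look roughly like this; the image name and tag are placeholders:

```shell
docker build -t manufactio:local .
docker run -d --name manufactio -p 25565:25565 manufactio:local
# mcstatus is installed inside the image, so the server can be checked once it is up
docker exec manufactio mcstatus localhost:25565 ping
```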

Dockerfile

The full Dockerfile is available inside the github repository found
here: https://github.com/deB4SH/Docker-Manufactio/blob/master/src/docker/Dockerfile

CI/CD

For setting up an automated build we need to start by thinking about how we want to build the image, how to tag it, and how to deploy it somewhere. This example uses the following stack:

  • Maven
    • structured approach for defining variables and components for each build
  • Fabric8 Docker Maven Plugin
    • an awesome plugin to build, tag, and deploy images with Maven
  • Jenkins

Regarding Maven: the scope of this tutorial is primarily the Dockerfile and the build and deployment process. Describing the whole Maven build lifecycle is a bit out of scope here; the pom.xml describes the whole build, if you are familiar with Maven. If desired, I'm going to write another post with this in focus. :)
With Maven out of scope, let's take a deeper look into the Jenkinsfile that provides everything to instruct my homelab Jenkins to build and deploy the image.

Jenkinsfile

My homelab Jenkins is set up with the Kubernetes Plugin, which provides an easy interface to allocate dynamic agents inside my homelab for builds. Since we are going to build a Docker image inside Jenkins, we need a Docker-in-Docker (dind) image. There are multiple available on Docker Hub, and some also provide Maven out of the box. In this guide my own dind image, which provides Maven and a JDK, is used. It can be found here: https://github.com/deB4SH/Docker-Maven-Dind

agent {
    kubernetes {
        yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: maven
      image: ghcr.io/deb4sh/docker-maven-dind:3.8.2-jdk-11-17.12.0
      command:
        - sleep
      args:
        - 99999
      volumeMounts:
        - name: dockersock
          mountPath: /var/run/docker.sock
  volumes:
    - name: dockersock
      hostPath:
        path: /var/run/docker.sock
'''
        defaultContainer 'maven'
    }
}

Jenkins is going to provision the maven container alongside the jnlp container that Jenkins requires for communication. To keep the container running we let it sleep for a long time.

Next up, we need to define the stages to build and push the image. This could also be done in a single stage block; if desired, everything could be merged into one.

As a personal side note: splitting tasks allows structured control over when to run certain tasks. For example, in a multibranch pipeline we don't need to push every build, but we do want to build all branches to check whether there are any issues.

//stages to build and deploy
stages {
    stage('check: prepare') {
        steps {
            sh '''
                mvn -version
                export MAVEN_OPTS="-Xmx1024m"
            '''
        }
    }
    stage('build image') {
        steps {
            sh 'mvn clean install -f pom.xml'
        }
    }
    stage('push image') {
        when {
            branch 'master'
        }
        steps {
            withCredentials([usernamePassword(credentialsId: 'docker-push-token', passwordVariable: 'pass', usernameVariable: 'user')]) {
                sh 'docker login ghcr.io -u $user -p $pass'
                sh 'mvn docker:push -f pom.xml'
            }
        }
    }
}

The first stage checks that Maven is available and sets the MAVEN_OPTS environment variable, in this specific case to increase the maximum RAM for the build. This is optional and could be removed; note that an export inside one sh step does not carry over to later steps, so you may prefer to set MAVEN_OPTS in the pipeline's environment block instead. The second stage provides all steps required to build the image with Maven. Last but not least, the third stage performs a docker login against the GitHub container registry and then pushes the image.

If everything works out in your Jenkins you should be greeted with a nice stage view after some runs.

Conclusion

After implementing all parts we are able to build an image, tag it, and deploy it. The image should be available via GitHub in your container registry, or in your local Docker engine for local usage only. This image also works in a Kubernetes environment where configuration files are stored inside a ConfigMap and mounted into the running container.

Happy mining!

Homelab Stories - Deploy your own Instance of Antennas in your Homelab

Hi all,

Watching linear TV programs is annoying, isn't it? But sometimes good talk shows or series still air on "old" TV. A general-purpose service to stream IPTV content into your local network is Tvheadend. Sadly, some awesome projects like Jellyfin or Plex are not able to catch the streams directly from Tvheadend and require an HDHomeRun API. Antennas serves as a proxy between the media systems and Tvheadend, acting as an API gateway.

Motivation

As a viewer, I want to watch and record TV shows directly, without having to keep my computer running or record via VLC.
As a viewer, I want to watch recorded shows anywhere, without resorting to tricks like opening some volume on my NAS through VLC.
Also, as a viewer, I would like to use my media system (Jellyfin) to stream the DVR content, because it is easy to use from any device.
Jellyfin is available on Android and Fire TV, so it should be easy to provide it on any device inside my household.

Currently there is a Tvheadend instance available that maps the IPTV service provided by Telekom to all local devices.

From the looks of it: Tvheadend as inbound, Antennas as API gateway, and Jellyfin as media system for all devices for easier access.

Solution

To get Antennas running in our cluster we need to provide several manifest files that contain the crucial parts. I am using media as the namespace in this case, but you are free to change that to whatever you desire. It is good practice to create a namespace for each individual application, or to group applications by topic.
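If the media namespace does not exist in your cluster yet, create it first:

```shell
kubectl create namespace media
```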

Let's start with the deployment.

apiVersion: apps/v1
kind: Deployment

metadata:
  name: antennas
  namespace: media

spec:
  replicas: 1
  selector:
    matchLabels:
      app: antennas
  template:
    metadata:
      labels:
        app: antennas
      name: antennas
    spec:
      containers:
        - image: thejf/antennas:latest
          imagePullPolicy: IfNotPresent
          name: antennas
          ports:
            - containerPort: 5004
              name: http
              protocol: TCP
          envFrom:
            - configMapRef:
                name: configmap-antennas
          resources:
            limits:
              cpu: 250m
              memory: 100M
            requests:
              cpu: 50m
              memory: 30M

I know using latest as a tag is bad practice by default, but sadly the author thejf tagged the most recent image only as latest and nothing else. Besides this, there are multiple other, older tags available, which may or may not be working. Also keep in mind: the current image is not available for armv8, armv7, or armhf. If you have set up a mixed cluster, please add the following block in front of your container definition.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64

Beside that, the deployment is pretty much straightforward. Antennas is not really resource-hungry and is quite "overpowered" with 250m CPU and 100M memory. The interesting part is the ConfigMap, which contains the URL for Antennas and the connection strings for Tvheadend. Keep in mind: if your Tvheadend is secured with credentials, you are exposing them here. In that case I would suggest moving them into a Secret or even a keystore like Vault.
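A sketch of such a Secret could look like this; the key name mirrors the ConfigMap below and the embedded credentials are placeholders. Reference it from the deployment with a secretRef entry under envFrom, next to the existing configMapRef:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-antennas
  namespace: media
type: Opaque
stringData:
  # hypothetical: credentials embedded into the Tvheadend URL
  TVHEADEND_URL: "http://user:pass@service-tvheaded-clusterip.media:9981"
```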

apiVersion: v1
kind: ConfigMap

metadata:
  name: configmap-antennas
  namespace: media

data:
  ANTENNAS_URL: "http://192.168.1.102:5004"
  TVHEADEND_URL: "http://service-tvheaded-clusterip.media:9981"
  TUNER_COUNT: "2"

Antennas itself provides its API under an IP address in my local network. Because macvlan is pretty difficult to provide in Kubernetes, I went with MetalLB (https://metallb.universe.tf/), which provides a Layer 2 load balancer for this kind of service. There is also a story available for setting up MetalLB in your cluster; check out my post Homelab Kubernetes Stories - Deploy METALLB in your homelab for further information.

kind: Service
apiVersion: v1
metadata:
  name: service-antennas
spec:
  ports:
    - name: http-antennas
      protocol: TCP
      port: 5004
      targetPort: 5004 # container port
  selector:
    app: antennas
  externalTrafficPolicy: Local
  loadBalancerIP: 192.168.1.102
  type: LoadBalancer

Thanks to Kubernetes' cluster-wide service DNS in the form servicename.namespace, it is pretty easy to access your Tvheadend instance if it is running in the same cluster. To access the Tvheadend instance I am using its service directly, so traffic is not required to run "externally" over a local address in my network.

If your media system is also running in the same cluster, you could swap the LoadBalancer for a ClusterIP service, or keep both running simultaneously in your infrastructure.

kind: Service
apiVersion: v1
metadata:
  name: service-antennas-clusterip
spec:
  type: ClusterIP
  ports:
    - name: http-antennas
      protocol: TCP
      port: 5004
      targetPort: 5004 # container port
  selector:
    app: antennas

After setting up these three or four files, it is easiest to hook them together in a single kustomization.yaml and use it as the aggregation point for simple deployments.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: media

resources:
- base/deployment.yaml
- base/service-lb.yaml
- base/service-clusterip.yaml
- base/configmap.yaml

With that in place, it is possible to simply call kubectl apply -k ./ (or k apply -k ./ with the common alias) to deploy your newly created deployment into the media namespace of your Kubernetes cluster.

Conclusion

We are now pretty much set up to stream any available IPTV content from Tvheadend via an HDHomeRun API. Thanks to the ClusterIP service, it is possible to use it directly in Jellyfin via http://service-antennas-clusterip.media:5004/ with no external traffic required.
If available, it is also possible to grab XMLTV data directly from Tvheadend via http://service-tvheaded-clusterip.media:9981/xmltv/channels to have some kind of TV guide available.

With this setup you should be good to go and stream away.

FAQ

If there are any questions, feel free to reach out via Twitter or Reddit.

Why no SSL?

I know, SSL everything. But since this is local-only traffic, I do not want to set up the required SSL "overhead": cert-manager, an ingress to serve the traffic securely, and making everything available with my self-signed root CA.

DevOpsStory - Keep your local Maven repository clean

Hi there!

Due to my current project I'm heavily involved with a CI/CD infrastructure provided by a client. There is nothing wrong with that in itself, but it creates a massive security and build issue if something like the local Maven repository is shared between multiple build nodes. A shared local repository opens up a lot of attack vectors for applications built on that same node, and it also generates noise if someone tries to disturb your build.

Example Case
Someone with malicious motives could install, through mvn install, a broken or wrong jar instead of the correct dependency you are expecting to receive.
This is easily done through

mvn install:install-file -Dfile=my_dummy_app.jar -DgroupId=randomGroup.tld -DartifactId=awesomeArtifact -Dversion=1.0.0 -Dpackaging=jar

Once the jar is in place in your local Maven repository, Maven doesn't re-download it; it expects that you're doing the right thing.
So: clearing the Maven cache is one way to stay ahead of this.

Solution

Clearing the Maven cache is a pretty easy task to do. There are two easy ways to remove it from your local environment or CI/CD environment.

Remove the cache

Due to the file-based nature of the Maven repository cache, you can simply remove the relevant data from it and you are good to go. In most cases the Maven cache lives by default in a .m2 directory in the home of the executing user.

Windows: C:\Users\YOUR_USERNAME\.m2
Linux: /home/YOUR_USERNAME/.m2

Keep in mind that dot-directories are hidden by default on many Linux distributions. After you've located your .m2, simply execute a remove command on it and the whole cache will be gone.

rm -rf .m2/

Inside a CI environment with a shared cache this may hurt other build tasks, so a more cautious approach is needed. Simply head down into the .m2/repository directory and remove components with caution.

If you're using clean install, it is a good idea to clean up your build artifacts after deploying them into an artifact directory.

Purge the cache through maven

An even simpler approach is purging the Maven cache with Maven itself. The dependency plugin provides a purge-local-repository goal: https://maven.apache.org/plugins/maven-dependency-plugin/examples/purging-local-repository.html

mvn dependency:purge-local-repository

In its default setup, the dependency plugin purges everything, including transitive dependencies of your application, which would result in a complete re-download of all artifacts. With the parameter actTransitively this behaviour can be disabled.

mvn dependency:purge-local-repository -DactTransitively=false

After looking at two approaches for removing artifacts from the local cache, how about changing the cache directory to something temporary?

Solution Number Two

Most of our CI/CD infrastructures are built on top of containers (e.g. Docker containers). Another approach is redirecting the local Maven repository inside the container. Note that this doesn't resolve the issue when working with persistent SSH workers/nodes.

mvn -Dmaven.repo.local=/tmp/mvn_repo clean install

The maven parameter maven.repo.local allows you to redirect the cache for the current maven call.

Conclusion

In short, we looked at three approaches to tackle the issue of shared Maven cache repositories in your environments.
Based on my experience, I often tend towards Solution Number Two when writing down build steps. It fits most projects best, and the traffic overhead is usually compensated by a local Maven mirror.

Building a Maven Task Builder for Bamboo

When working with the Atlassian Bamboo CI server, it quickly gets annoying to set up a nice and readable build pipeline. Within my current project we have multiple consecutive Maven tasks that build, test, and deploy parts of the application.


Bamboo CI Server Environment Variable Inject

Hi there!

Due to a new project at work I did a deep dive into the Bamboo build server from Atlassian. After around four months working as a DevOps and ops engineer on the new project with Bamboo, I came to the conclusion that Bamboo still needs a lot of work to be competitive with Jenkins or something like GitLab CI or GitHub Actions.

While developing the pipeline for releases, an issue came up that is quite difficult to resolve via Bamboo: extracting the version number from a branch name and setting the Maven version afterwards based on that extracted number. Within Jenkins I would extract the version number inside the relevant stage and inject it into the environment.
A snippet to do this task could look like the following:

stage('setVerion: release-branch') {
when {
branch 'release/*'
}
environment {
BRANCHVERSION = sh(
script: "echo ${env.BRANCH_NAME} | sed -E 's/release\\/([0-9a-zA-Z.\\-]+)/\\1/'",
returnStdout: true
).trim()
}
steps {
echo 'Setting release version'
echo "${BRANCHVERSION}"
sh 'mvn versions:set -DnewVersion=${BRANCHVERSION} -f ./pom.xml'
}
}

Within the world of Bamboo, that's not quite as easy.
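For reference, the extraction performed in the Jenkins environment block above can be reproduced on a plain shell; the branch name here is just an example:

```shell
# strip the "release/" prefix from a branch name to get the bare version
branch="release/1.4.0"
version=$(echo "$branch" | sed -E 's/release\/([0-9a-zA-Z.-]+)/\1/')
echo "$version"   # prints 1.4.0
```

For a non-release branch the expression matches nothing and the input passes through unchanged, which is why the Bamboo script further down guards with an if.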

Solution

To achieve the same, we need to split this simple step into four parts.

First we need a script that extracts the version number and prepares it in a file, so it can be injected into the build context afterwards. Let's take a look at the following script example.

#!/usr/bin/env bash
# note: the [[ ]] test below is a bashism, so the shebang must be bash, not sh
currentBranch=$bamboo_planRepository_default_branchName
if [[ $currentBranch == *"release/"* ]]; then
    versionNumber=$(echo "$currentBranch" | sed -E 's/release\/([0-9a-zA-Z.\-]+)/\1/')
else
    versionNumber="0.0.0-SNAPSHOT"
fi
echo "CONTAINER-VERSIONNUMBER: $versionNumber"
echo "Preparing versionnumber to inject into bamboo"
echo "versionNumber=$versionNumber" >> .version.cfg
echo "Created .version.cfg under the root directory"

Within this script we use the bamboo-provided variable for branch names inside a multibranch pipeline. We extract the version number if this is a release branch; otherwise we set 0.0.0-SNAPSHOT as the version number.
After extracting the version number (or setting a development number), we store the information inside a cfg file that serves as temporary storage to import the data from.
With the following step we import the data into the build context.

private Task exportBranchVersionNumber() {
    return new ScriptTask()
        .description("Exports the Branch-Version into an environment variable")
        .interpreterBinSh()
        .inlineBody("./version.sh");
}
private InjectVariablesTask injectVersionnumberIntoBamboo() {
    return new InjectVariablesTask()
        .path(".version.cfg")
        .namespace("inject")
        .scope(InjectVariablesScope.LOCAL);
}

After injecting everything into the local scope of the current build context, you can easily use the variables again via the reference schema. Just keep in mind that you need to prefix them with bamboo.inject to acquire the correct information.

The next step just shows how to use this in a real world example.

private MavenTask changeVersion() {
    return new MavenTask()
        .description("Updates and changes the version number for all containers")
        .goal("versions:set -DnewVersion=${bamboo.inject.versionNumber} -f pom.xml -s settings.xml versions:update-child-modules")
        .hasTests(false)
        .version3()
        .jdk("JDK 1.8")
        .executableLabel("Maven-3.3.9");
}

Conclusion

Bamboo needs a lot of scripting magic to get "well-known" pipeline steps running. Currently I am refining even more components of the global build file. Stay tuned for more Bamboo guides in the future.

Download a web-index recursively

Hey there, my professor keeps all his data in a separate web storage. The usual way to get the files is to browse them in the web browser at a specific domain. As a lazy person who doesn't want to download all lecture files by hand, it is much easier to fetch them via wget in recursive mode. To avoid downloading the parent folders, you need to exclude them with a second parameter. The whole web storage is secured with simple HTTP auth.

wget -r --no-parent --http-user=USERNAME --http-password=PASSWORD URL

That's it :)

Exclude Robots from etherpad lite

Etherpad Lite is easy to extend via npm install ep_<plugin-name>. While extending Etherpad you need to keep in mind that some plugins create new public pages that are crawlable by Google, Bing, and co. As an example, a plugin beloved by me (https://github.com/JohnMcLear/eplist_pads): it creates nice lists of all your pads, but it also creates publicly searchable IDs under /list and /public. Fixing that is pretty easy: you just need to edit your robots.txt file under /static/robots.txt, which you can find under etherpad-lite/src/static. Just add these two lines at the bottom of the file.

Disallow: /list
Disallow: /public

Extending the OpenERP POS Module

I am currently actively developing and extending the OpenERP point-of-sale extension, but at first I didn't get any of my extensions loaded into it. The problem? OpenERP initialises all JavaScript code after rendering the whole page, and it runs the user-generated code before running its own code. I found a very small and smart way to work around this problem with a recursive timeout caller. The script is nice and easy to understand. :)

Source

console.log("DEBUG-MSG: Runtime-Extender start");
var intervalholder = null;
intervalholder = setInterval(function(){
    if(Object.keys(openerp.instances).length > 0){
        console.log("Found openerp.instance, load your plugins");
        openerpInstance = openerp.instances.instance0;
        //load here
        openerp.yourextension(openerpInstance);
        clearInterval(intervalholder);
        intervalholder = null;
    }
}, 1000);
openerp.yourextension = function(instance){
    var module = instance.pointofsale;
    //code here
}

nice and simple, mh? ;)

Issues with Cubietruck / Cubieboard 3

Hey everyone, I am currently setting up my Cubietruck to get my personal cloud running with ownCloud and services like Firefox Sync to back up my browsing history. I ran into some annoyances caused by ubuntu.com's behaviour of hosting the sources for old distributions on ports.ubuntu.com. Linaro for the Cubietruck ships with Quantal Quetzal Ubuntu, which is fine. After setting up the system, my usual habit is to get the latest updates and fixes, but nothing happened, just 404 errors from ubuntu.com. After ruling out things like IPv6 connection errors, I checked whether anything was still hosted for quantal. Nope, nothing in there… What now? You just need to update your /etc/apt/sources.list - I uploaded my new one to two hosts: phcn.de (http://paste.phcn.de/?i=1409570170) and w8l.org (http://paste.w8l.org/kt82xg4erqo9)

After updating your sources, just type into the console: apt-get update && apt-get -y dist-upgrade

The system upgrade on the Cubietruck may take a while - for me it was around half an hour to an hour.

Hope this helps some newcomers :)

Update: 09.01.2021

It seems that both paste services have removed the paste entries. This log entry therefore remains available for historical reasons only.

Pseudocode for some basic Algorithms

Hello everyone, I am currently in my final learning phase for exams and stumbled over the topic of writing pseudocode for simple algorithms like selection sort, insertion sort, or bubble sort. I spent about an hour on all three to get a nice and clean version of each done. I know there is "ready to use" material on Wikipedia and other bulletin boards… but having your own version is somehow cool :)

SelectSort

selectSort(Array a){
    n = a.length()
    i = 0
    while(i < n){
        min = i
        for(j = i+1; j < n; j++){
            if(a[j] < a[min]){
                min = j
            }
        }
        a.switch(i, min)
        i++
    }
}

InsertSort

insertSort(Array a){
    n = a.length()
    i = 0
    while(i < n){
        for(j = n-1; j > 0; j--){
            if(a[j-1] > a[j]){
                val = a[j]
                a[j] = a[j-1]
                a[j-1] = val
            }
        }
        i++
    }
}

BubbleSort

bubbleSort(Array a){
    n = a.length()
    while(n > 1){
        for(i = 0; i < n-1; i++){
            if(a[i] > a[i+1]){
                a.switch(i, i+1)
            }
        }
        n--
    }
}