RepRapCloud

Version: 0.019 (ALPHA)

RepRapCloud (rrcloud) is a small but powerful Perl script which provides an easy backend framework to relay computational work remotely among many servers and retrieve the results locally; both synchronous (returns when done) and asynchronous (returns immediately and reports the state of the task: 'busy', 'complete' or 'failed').

% openscad.cloud test.scad -otest.stl
% openjscad.cloud test.jscad -otest.stl
% slic3r.cloud --load=prusa.conf huge.stl --output=huge.gcode
% printrun.cloud /dev/ttyUSB3 huge.gcode

These commands use myserver.local (as defined in rrcloudrc), do the work (openscad, openjscad, slicing etc.) on that server, and return when the task is done (synchronous).

% rrcloud --s=myserver.local openscad test.scad
id: 1361982308-837500
% rrcloud --s=myserver.local openjscad test.jscad
id: 1361982310-219223
% rrcloud '--notifier=http://someserver.local/ping-$id' --s=myserver.local slic3r --load=prusa.conf huge.stl
id: 1361982318-371735
% rrcloud '--notifier=http://$myip/done?$id' --s=myserver.local printrun /dev/ttyUSB3 huge.gcode
id: 1361982322-198887

does nearly the same, except it returns right away (asynchronous); call rrcloud info id to check whether the job is 'complete' (or 'failed'), and the result is then found at tasks/out/id[.ext], with the proper extension set depending on the service.

The --notifier takes a URL which is called once the server has finished the task; $id is replaced with the task id, and $myip with the IP of the client that requested the task on the server (linking back).
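For example, to poll the state of the first task above (a sketch; the id is the one returned when the task was issued, and the status comes back as 'busy', 'complete' or 'failed' as described):

% rrcloud --s=myserver.local info 1361982308-837500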

Note: This is ALPHA software, no thorough security code-review has happened yet, so use it solely in a trusted (local) network.

Requirements

What Works

  • openscad (single file input/output), e.g. openscad.cloud huge.scad -ohuge.stl (OpenSCAD)
  • openjscad (single file input/output with support of OpenSCAD.js), e.g. openjscad.cloud huge.jscad -ohuge.stl (OpenJSCAD.org)
  • slic3r, e.g. slic3r.cloud --load=my.conf huge.stl --output=huge.gcode
  • printrun e.g. printrun.cloud /dev/ttyUSB3 huge.gcode

  • not yet but planned:
    • multiple input files not referenced by arguments (e.g. huge.scad including aa.scad) - likely by support of directory upload (not yet sure)
    • multi-stage open[j]scad -> slic3r -> printrun
    • fine-grained progress indicator
    • suspend/resume/kill of jobs, in particular useful for printrun service

History

  • 2013/05/20: 0.019: better XHR support Access-Control-Allow-Origin: * in header
  • 2013/03/24: 0.018: integrating http:// notifier/callback for async requests
  • 2013/03/07: 0.017: increased JSON support through all operations
  • 2013/03/05: 0.016: native arguments (switches and variables) supported, printrun service added (via Printrun:printcore.py)
  • 2013/03/04: 0.015: preparing general interface for several dbs (mongodb, mysql, flat-file (default))
  • 2013/03/03: 0.014: logging, and some code clean-up
  • 2013/03/02: 0.013: checking preargN for validity
  • 2013/03/02: 0.012: openjscad service included
  • 2013/02/25: 0.011: rrcloudrc at various places considered, --local force local
  • 2013/02/24: 0.009: replaced backticks (``) by a fork & exec combo, a bit of code clean-up
  • 2013/02/24: 0.008: additional prearguments (e.g. --load=file.conf as for slic3r)
  • 2013/02/23: 0.007: directory support as input (experimental, disabled)
  • 2013/02/22: 0.005: multiple input files supported, added 'echo' service
  • 2013/02/19: 0.002: remote stuff slowly working, not yet complete
  • 2013/02/18: 0.001: first version, simple services of openscad, slic3r working

Install

% cpan Time::HiRes
% make install

Permissions

Be aware that rrcloud is a command-line program (CLI) and a CGI in one: the CLI is executed under your login, whereas the CGI is executed as user www-data or similar (depending on your UNIX flavour). rrcloud (and *.cloud) create

  • tasks/
    • in/
    • out/
    • log/
    • info/
under that identity; if you then mix CLI and CGI it may cause permission problems, e.g. www-data not having the permission to write files in directories created under your user identity.

Solution

Uniform Use

Do not mix CLI and CGI, e.g. use rrcloud and *.cloud as CLI on a local machine, and on a server use it only to receive requests via CGI, not operating it via CLI.

Mixed Use

Make user www-data part of your group (/etc/group), so user www-data can write into directories created by you (your login); this way you can use the mixed operation, as sketched below.
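A minimal sketch of such a setup (assuming your login group is called mygroup and the tasks/ tree should become group-writable; adjust names to your system):

% sudo usermod -a -G mygroup www-data    # add www-data to your group
% chmod -R g+w tasks                     # allow the group to write into tasks/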

Note: do not use rrcloud on itself, e.g. calling rrcloud --s=localhost info, which invokes the same local rrcloud; it will mix up the state of the tasks and fail to deliver accurate results.

Usage: Command Line

rrcloud is a hybrid of CLI and CGI, as mentioned, so it can be used on the command line or via the web, as client or server:

Local

% ./openscad.cloud tests/cube.scad -otests/cube.stl
% ./slic3r.cloud tests/cube.stl --output=tests/cube.gcode

Note: *.cloud are just symbolic links (sym-links) to rrcloud; depending on the name it is invoked as, rrcloud behaves accordingly.
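For illustration only (a sketch; make install already creates these links), such a link could be set up like this:

% ln -s rrcloud openscad.cloud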

Remote

Edit rrcloudrc in the same directory (or ~/.rrcloudrc):

servers = server.local,server2.local      # , separated list
slic3r.servers = server.local             # server(s) for slic3r.cloud only
openscad.servers = server2.local          # server(s) for openscad.cloud only
printrun.servers = raspberrypi.local      # server(s) for printrun.cloud only

then

% ./openscad.cloud tests/cube.scad -otests/cube.stl
% ./slic3r.cloud tests/cube.stl --output=tests/cube.gcode

Usage: Web

index.cgi is just a sym-link to rrcloud; you can access the servers remotely via http://server.local:4468 once you have configured your Apache HTTPD or Lighttpd accordingly (document root pointing to RepRapCloud/).

Apache HTTPD

Add in /etc/apache2/ports.conf:

NameVirtualHost *:4468
Listen 80
Listen 4468

and create /etc/apache2/sites-available/rrcloud

<VirtualHost *:4468>
   ServerAdmin webmaster@localhost
   
   DocumentRoot /path/to/RepRapCloud
   <Directory />
      Options Indexes FollowSymLinks ExecCGI 
      DirectoryIndex index.cgi
   </Directory>
</VirtualHost>

and "activate" it:

% cd /etc/apache2/sites-enabled; ln -s ../sites-available/rrcloud

and make sure /etc/apache2/mods-enabled/cgi.load exists.
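On Debian/Ubuntu-style Apache setups this can be done with a2enmod (a sketch; a2enmod is Apache's own helper, not part of RepRapCloud):

% sudo a2enmod cgi
% sudo service apache2 restart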

Lighttpd

Add to your /etc/lighttpd/lighttpd.conf something like this:
$SERVER["socket"] == ":4468" {
   server.document-root = "/path/to/RepRapCloud/"
   server.reject-expect-100-with-417 = "disable"
   index-file.names = ( "index.cgi" )
   cgi.assign = ( ".cgi" => "/usr/bin/perl" )
}
The server.reject-expect-100-with-417 = "disable" setting is required for curl-based upload to work.

Web Access

Depending on the program (HTTP_USER_AGENT), rrcloud (respectively index.cgi) formats the output accordingly, e.g. a web browser gets a nicely formatted list (http://server.local:4468/),

whereas wget/curl or the like gets a simple text list:

client: xxx.xxx.86.120
cmd: cp tasks/in/1361787153-873811.txt tasks/out/1361787153-093541
ctime: 1361787153.89647
id: 1361787153-093541
in: tasks/in/1361787152-863093.txt
out: tasks/out/1361787153-093541
pid: 32744
server: server.local
service: echo
status: busy

client: xxx.xxx.86.120
cmd: openscad tasks/in/1361787155-296870.scad -otasks/out/1361787155-774973.stl
ctime: 1361787155.83115
etime: 1361787155.89783
id: 1361787155-774973
in: tasks/in/1361787154-479659.scad
out: tasks/out/1361787155-774973.stl
pid: 32749
server: server.local
service: openscad
status: complete

...

You can also force it to return JSON, e.g.

{
   "args": "--load=tests/slic3r.conf tmp/cube.stl --output=tmp/cube.gcode",
   "client": "xxx.xxx.86.120",
   "cmd": "slic3r --load=tasks/in/1361787183-742842.conf tasks/in/1361787183-933412.stl --output=tasks/out/1361787183-011772.gcode",
   "ctime": "1361787183.93071",
   "etime": "1361787185.69113",
   "id": "1361787183-011772",
   "in": "tasks/in/1361787181-093430.conf,tasks/in/1361787181-792570.stl",
   "out": "tasks/out/1361787183-011772.gcode",
   "pid": "1062",
   "server": "server.local",
   "service": "slic3r",
   "status": "complete"
}

which you can process with JQuery (see next chapter).
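To get this JSON output from the command line, for example (a sketch; format=json is the same variable used in the JQuery examples below):

% curl 'http://server.local:4468/?service=info&format=json'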

Since the main transport layer is HTTP, you can use existing load-balancing software to distribute the tasks behind one single IP.

Web API

The API is in its current form very simple:

Task Issuing (service: xyz)

HTTP POST with the following variables:

service: service
fileInn: file upload

and optionally:

notifier: url

where service is one of { openscad, openjscad, slic3r, printrun } etc. (see later in this description how to query the available services), and n is 0,1,2,3,...; in the case of the notifier you can formulate the URL as you wish, with two variables that get replaced:

$id -> task-id
$myip -> ip of client

CLI:

% curl -F service=openscad -F fileIn0=@test.scad http://server.local:4468/
% curl -F service=openscad -F 'notifier=http://$myip/done?$id' -F fileIn0=@test.scad http://server.local:4468/

JQuery:

$.post("http://server.local:4468/", 
   { service: 'openscad', fileIn0: '...', format: 'json' }).done(function(data) {
      var task = $.parseJSON(data);
   });

$.post("http://server.local:4468/",    // issue a task and respond back to when done
   { service: 'openscad', fileIn0: '...', notifier: 'http://$myip/done?$id', format: 'json' }).done(function(data) {
      var task = $.parseJSON(data);
   });

fileIn0..n can be an actual file-upload, or the content itself (e.g. a string containing .stl or .gcode directly).
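For example, passing the content itself instead of uploading a file (a sketch; the OpenSCAD source is given inline as the value of fileIn0):

% curl -F service=openscad -F 'fileIn0=cube([10,10,10]);' http://server.local:4468/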

HTTP Response (text/plain) will be the same response as "Task Info" (explained next):

Task Info (service: info)

HTTP GET with the following variables:

service: info
id: id           // (omit 'id:' and you get info on all tasks)

CLI:

% curl 'http://server.local:4468/?service=info&id=1361787155-774973'

HTTP Response (text/plain):

client: ip       (your remote IP)
cmd: ...         (actual command run on server)
ctime: time      (creation time of task on server)
etime: time      (end time of task on server)
id: id           (id of task)
in: filelist     (comma separated list of filenames)
out: filename    (single filename of results)
notifier: url    (in case notifier is set, it's listed as such)
pid: pid         (process id on server)
server: ip       (server IP or hostname)
service: service (requested service)
status: status   (status: 'busy', 'failed', or 'complete')

text:

client: xxx.xxx.86.120
cmd: openscad tasks/in/1361787155-296870.scad -otasks/out/1361787155-774973.stl
ctime: 1361787155.83115
etime: 1361787155.89783
id: 1361787155-774973
in: tasks/in/1361787154-479659.scad
out: tasks/out/1361787155-774973.stl
pid: 32749
server: server.local
service: openscad
status: complete

json:

{
   "client": "xxx.xxx.86.120",
   "cmd": "openscad tasks/in/1361787155-296870.scad -otasks/out/1361787155-774973.stl",
   "ctime": "1361787155.83115",
   "etime": "1361787155.89783",
   "id": "1361787155-774973",
   "in": "tasks/in/1361787154-479659.scad",
   "out": "tasks/out/1361787155-774973.stl",
   "pid": "32749",
   "server": "server.local",
   "service": "openscad",
   "status": "complete"
}

JQuery:

// request info on task
$.get("http://server.local:4468/", 
   { service: 'info', id: '1361787155-774973', format: 'json' }).done(function(data) {
      var task = $.parseJSON(data);
      task.status; // 'busy', 'failed', or 'complete'
      task.out;    // contains path of the result (if task.status=='complete')
      // -- your code to process results
   });

// get all tasks
$.get("http://server.local:4468/", 
   { service: 'info', format: 'json' }).done(function(data) {
      var tasks = $.parseJSON(data);
      for(var i=0; i<tasks.length; i++) {
         tasks[i].status; // 'busy', 'failed', or 'complete'
         tasks[i].out;    // contains path of the result (if tasks[i].status=='complete')
         // -- your code to process results
      }
   });

Server Info (service: meta)

HTTP GET with the following variables:

service: meta

and provides results like:

cpuLoad: 1.91                 // current cpu load
maxDataRetention: 24          // max data retention [hrs]
serverName: brahma
services: echo,openjscad,openscad,povray,printrun,slic3r
tasks: 12                     // currently 12 tasks in pool
timeout: 1800                 // timeout [s]
uptime: 0d 06h 00m 52s        // uptime of server
version: RepRapCloud 0.017    // version of software

and if format=json is set, it is given in JSON format; see this example where AJAX technology is used to fetch the information.
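For example (a sketch of querying a server's meta information from the command line):

% curl 'http://server.local:4468/?service=meta'
% curl 'http://server.local:4468/?service=meta&format=json'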

Task Results

Based on the out field retrieved via service: info, you can request the data directly:

GET http://server.local:4468/out

e.g.

GET http://server.local:4468/tasks/out/1361787155-774973.stl

Task Log

Based on the id, you can also retrieve the log of the task:

GET http://server.local:4468/tasks/log/id

e.g.

GET http://server.local:4468/tasks/log/1361787155-774973

Note: the procedure for retrieving the results and log of a task is likely to change soon.

Internal Command Composition

The services/*.conf define the services available on a server. Let us look at the slic3r.conf more closely:

path = /usr/bin:/usr/local/bin         # -- where to find the slic3r executable
cmd = slic3r                           # -- the actual executable
argInput = --load=$fileIn              # -- possible additional input
fileOut = $id.gcode                    # -- what the output file looks like
output = --output=$fileOut             # -- actual argument composition for output

Now, the moment we issue a task, we have to set:

  • fileInn: the actual file-upload via POST
  • preargn: references 'fileInn' directly

for example for a task:

fileIn0: slic3r.conf
prearg0: --load=
fileIn1: test.stl

which then gives:

cmd prearg0+fileIn0 fileIn1 [output]

e.g.

slic3r --load=tasks/in/1361787183-742842.conf tasks/in/1361787183-933412.stl --output=tasks/out/1361787183-011772.gcode
cmd    prearg0+fileIn0                        fileIn1                        output
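Issued via the Web API, the same task would look like this (a sketch, using the field names and test files from the example above):

% curl -F service=slic3r -F prearg0=--load= -F fileIn0=@slic3r.conf -F fileIn1=@test.stl http://server.local:4468/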

Note: If a preargn is set which doesn't fit the argInput field of the service.conf, it is ignored (otherwise one could set 'prearg0=; do-something-not-approved' and hack the server).

Hint: This configuration and composition procedure is preliminary and might change later.

See Also

  • Netrap, distributed printing over several hosts
  • BotQueue V2, distributed slicing & printing over several hosts
  • OctoPrint, distributed printing from one host
  • SlicerHub, distributed slicing

That's all for now,

Rene K. Mueller
initial 2013/02/24, updated 2013/03/02
