
Parallel Computing in Optimization

Main.ParallelComputing History


Deleted lines 50-51:
%width=500px%Attach:parallel_gekko.png
Added lines 190-193:

%width=500px%Attach:parallel_gekko.png

----
Deleted lines 3-4:

!!!! Parallel Computing
Added lines 13-14:
(:toggle hide multithread button show="Show Multithreading in Python":)
(:div id=multithread:)
Added line 47:
(:divend:)
Added lines 49-190:

%width=500px%Attach:parallel_gekko.png

(:toggle hide gekko button show="Show GEKKO Python Solution":)
(:div id=gekko:)
(:source lang=python:)
import numpy as np
import threading
import time, random
from gekko import GEKKO

class ThreadClass(threading.Thread):
    def __init__(self, id, server, ai, bi):
        s = self
        s.id = id
        s.server = server
        s.m = GEKKO(remote=True, server=server) # solve on the selected server
        s.a = ai
        s.b = bi
        s.objective = float('NaN')

        # initialize variables
        s.m.x1 = s.m.Var(1,lb=1,ub=5)
        s.m.x2 = s.m.Var(5,lb=1,ub=5)
        s.m.x3 = s.m.Var(5,lb=1,ub=5)
        s.m.x4 = s.m.Var(1,lb=1,ub=5)

        # Equations
        s.m.Equation(s.m.x1*s.m.x2*s.m.x3*s.m.x4>=s.a)
        s.m.Equation(s.m.x1**2+s.m.x2**2+s.m.x3**2+s.m.x4**2==s.b)

        # Objective
        s.m.Obj(s.m.x1*s.m.x4*(s.m.x1+s.m.x2+s.m.x3)+s.m.x3)

        # Set global options
        s.m.options.IMODE = 3 # steady state optimization
        s.m.options.SOLVER = 1 # APOPT solver

        threading.Thread.__init__(s)

    def run(self):

        # Don't overload server by executing all scripts at once
        sleep_time = random.random()
        time.sleep(sleep_time)

        print('Running application ' + str(self.id) + '\n')

        # Solve
        self.m.solve(disp=False)

        # Results
        #print('')
        #print('Results')
        #print('x1: ' + str(self.m.x1.value))
        #print('x2: ' + str(self.m.x2.value))
        #print('x3: ' + str(self.m.x3.value))
        #print('x4: ' + str(self.m.x4.value))

        # Retrieve objective if successful
        if (self.m.options.APPSTATUS==1):
            self.objective = self.m.options.objfcnval
        else:
            self.objective = float('NaN')

# Select server
server = 'http://byu.apmonitor.com'

# Optimize at mesh points
x = np.arange(20.0, 30.0, 2.0)
y = np.arange(30.0, 50.0, 2.0)
a, b = np.meshgrid(x, y)

# Array of threads
threads = []

# Calculate objective at all meshgrid points

# Load applications
id = 0
for i in range(a.shape[0]):
    for j in range(b.shape[1]):
        # Create new thread
        threads.append(ThreadClass(id, server, a[i,j], b[i,j]))
        # Increment ID
        id += 1
       
# Run applications simultaneously as multiple threads
# Max number of threads to run at once
max_threads = 8
for t in threads:
    while threading.active_count()>max_threads:
        # check for additional threads every 0.01 sec
        time.sleep(0.01)
    # start the thread
    t.start()

# Check for completion
mt = 3.0 # max time
it = 0.0 # incrementing time
st = 1.0 # sleep time
# the main thread always counts, so wait while worker threads remain
while threading.active_count()>1:
    time.sleep(st)
    it = it + st
    print('Active Threads: ' + str(threading.active_count()))
    # Terminate after max time
    if (it>=mt):
        break

# Wait for all threads to complete
#for t in threads:
#    t.join()
#print('Threads complete')

# Initialize array for objective
obj = np.empty_like(a)

# Retrieve objective results
id = 0
for i in range(a.shape[0]):
    for j in range(b.shape[1]):
        obj[i,j] = threads[id].objective
        id += 1

# plot 3D figure of results
import matplotlib.pyplot as plt
from matplotlib import cm

fig = plt.figure()
# fig.gca(projection='3d') was removed in Matplotlib 3.6
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(a, b, obj, \
                      rstride=1, cstride=1, cmap=cm.coolwarm, \
                      vmin = 12, vmax = 22, linewidth=0, antialiased=False)
ax.set_xlabel('a')
ax.set_ylabel('b')
ax.set_zlabel('obj')
ax.set_title('Multi-Threaded GEKKO')
plt.show()
(:sourceend:)
(:divend:)
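The manual throttling loop above polls the active thread count to keep at most 8 solves running at once. The standard-library ''concurrent.futures'' module expresses the same pattern without polling. The sketch below uses a placeholder `solve_one` function (an assumption, standing in for building and solving the GEKKO model at one mesh point); `map` preserves input order, so the results reshape directly back onto the meshgrid.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Placeholder for one optimization at mesh point (a, b);
# in the GEKKO application this would build and solve the model.
def solve_one(ai, bi):
    return ai + bi  # stand-in objective value

# same mesh as the GEKKO example
x = np.arange(20.0, 30.0, 2.0)
y = np.arange(30.0, 50.0, 2.0)
a, b = np.meshgrid(x, y)

# Run at most 8 solves at a time; pool.map keeps input order,
# so results line up with the flattened meshgrid points.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(solve_one, a.ravel(), b.ravel()))

obj = np.array(results).reshape(a.shape)
print(obj.shape)  # → (10, 5)
```

The executor also propagates exceptions from worker threads, which the polling loop silently ignores.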
Changed lines 7-8 from:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB or Python script. In Python, parallelization is accomplished with multi-threading.
to:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB or Python script.

!!!! Multithreading in Python

In Python, parallelization is accomplished with multithreading. The following example shows how to create and run a program with 10 threads that each print a message.
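The 10-thread listing recorded later in this history uses Python 2 `print` statements and the deprecated `activeCount`/`getName` spellings. A Python 3 version of the same example might look like:

```python
import threading
import datetime
import time, random

class MyThread(threading.Thread):
    def __init__(self, id):
        super().__init__()
        self.id = id
        self.delay = random.random()  # stagger completion times
    def run(self):
        time.sleep(self.delay)
        now = datetime.datetime.now()
        print('ID => %s: %s completes at %s' % (self.id, self.name, now))

# Start threads
threads = [MyThread(i) for i in range(10)]
for t in threads:
    t.start()
    print('Active threads: ' + str(threading.active_count()))

# Wait for all threads to complete
for t in threads:
    t.join()
print('Threads complete')
```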
Changed lines 26-28 from:
       print "ID => %s: %s completes at %s\n" % \
              (self.id, self.getName(), now)
to:
       print("ID => %s: %s completes at %s\n" % \
              (self.id, self.getName(), now))
Changed lines 34-35 from:
   print 'Active threads: ' + str(threading.activeCount())
to:
   print('Active threads: ' + str(threading.activeCount()))
Changed lines 37-39 from:
print 'All threads: \n'
print threading.enumerate()
to:
print('All threads: \n')
print(threading.enumerate())
Changed line 43 from:
print 'Threads complete'
to:
print('Threads complete')
Changed lines 7-42 from:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB or Python script. The tutorial begins with a simple Nonlinear Programming problem. The tutorial examples are available for download below:
to:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB or Python script. In Python, parallelization is accomplished with multi-threading.

(:source lang=python:)
import threading
import datetime
import time, random

class MyThread(threading.Thread):
    def __init__(self, id):
        self.id = id
        self.delay = random.random()
        threading.Thread.__init__(self)
    def run(self):
        time.sleep(self.delay)
        now = datetime.datetime.now()
        print "ID => %s: %s completes at %s\n" % \
              (self.id, self.getName(), now)

# Start threads
threads = []
for i in range(10):
    threads.append(MyThread(i))
    threads[i].start()
    print 'Active threads: ' + str(threading.activeCount())

# Print threads
print 'All threads: \n'
print threading.enumerate()

# Wait for all threads to complete
for t in threads:
    t.join()
print 'Threads complete'
(:sourceend:)

The next step is to embed a simple Nonlinear Programming (NLP) problem into the multi-threaded application. The tutorial examples are available for download below:
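The NLP embedded in the multi-threaded examples is the well-known Hock-Schittkowski problem 71. As a point of comparison (not part of the original tutorial), a single instance with `a=25` and `b=40` can also be solved locally with SciPy; the choice of the SLSQP method here is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# min  x1*x4*(x1+x2+x3) + x3
# s.t. x1*x2*x3*x4 >= 25,  x1^2+x2^2+x3^2+x4^2 = 40,  1 <= xi <= 5
def obj(x):
    return x[0]*x[3]*(x[0]+x[1]+x[2]) + x[2]

cons = [{'type': 'ineq', 'fun': lambda x: x[0]*x[1]*x[2]*x[3] - 25.0},
        {'type': 'eq',   'fun': lambda x: np.sum(x**2) - 40.0}]
x0 = np.array([1.0, 5.0, 5.0, 1.0])  # same initial guess as the GEKKO model
res = minimize(obj, x0, bounds=[(1, 5)]*4, constraints=cons, method='SLSQP')
print(round(res.fun, 3))  # known optimum is about 17.014
```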
April 07, 2016, at 11:11 PM by 10.5.113.123 -
Changed lines 12-13 from:
* [[Attach:parallel_computing_with_apm_python.zip|Attach:parallel_computing_with_apm_python.png]]
to:
-> [[Attach:parallel_computing_with_apm_python.zip|Attach:parallel_computing_with_apm_python.png]]
Changed line 17 from:
* [[Attach:parallel_computing_with_apm.zip|Attach:parallel_computing_with_apm.png]]
to:
-> [[Attach:parallel_computing_with_apm.zip|Attach:parallel_computing_with_apm.png]]
January 21, 2013, at 07:40 AM by 69.169.188.188 -
Changed line 12 from:
* [[Attach:parallel_computing_with_apm.zip|Attach:parallel_computing_with_apm_python.png]]
to:
* [[Attach:parallel_computing_with_apm_python.zip|Attach:parallel_computing_with_apm_python.png]]
January 21, 2013, at 07:39 AM by 69.169.188.188 -
Changed line 7 from:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB interface. The tutorial begins with a simple Nonlinear Programming problem. The tutorial examples are available for download below:
to:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB or Python script. The tutorial begins with a simple Nonlinear Programming problem. The tutorial examples are available for download below:
January 21, 2013, at 07:35 AM by 69.169.188.188 -
Changed lines 7-11 from:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB interface. The tutorial begins with a simple Nonlinear Programming problem that is formulated in two different ways. The tutorial example and other files are available for download below:

* [[Attach:parallel_computing_with_apm.zip|Parallel Computing Example Files (.zip)]]

[[Attach:parallel_computing_with_apm.zip|Attach:parallel_computing_with_apm.png]]
to:
APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB interface. The tutorial begins with a simple Nonlinear Programming problem. The tutorial examples are available for download below:

----

* [[Attach:parallel_computing_with_apm_python.zip|Parallel Computing Example Files for APM Python (.zip)]]
* [[Attach:parallel_computing_with_apm.zip|Attach:parallel_computing_with_apm_python.png]]

----

* [[Attach:parallel_computing_with_apm.zip|Parallel Computing Example Files for APM MATLAB (.zip)]]
* [[Attach:parallel_computing_with_apm.zip|Attach:parallel_computing_with_apm.png]]

----
January 19, 2013, at 06:32 AM by 69.169.188.188 -
Changed line 14 from:
<iframe width="560" height="315" src="http://www.youtube.com/embed/GB0NYz-k8ZM?rel=0" frameborder="0" allowfullscreen></iframe>
to:
<iframe width="560" height="315" src="http://www.youtube.com/embed/Hr-d_yHKPn4?rel=0" frameborder="0" allowfullscreen></iframe>
January 19, 2013, at 12:43 AM by 128.187.97.21 -
Added lines 9-10:
* [[Attach:parallel_computing_with_apm.zip|Parallel Computing Example Files (.zip)]]
Changed lines 13-16 from:
* [[Attach:parallel_computing_with_apm.zip|Parallel Computing Example Files (.zip)]]


to:
(:html:)
<iframe width="560" height="315" src="http://www.youtube.com/embed/GB0NYz-k8ZM?rel=0" frameborder="0" allowfullscreen></iframe>
(:htmlend:)
January 19, 2013, at 12:40 AM by 128.187.97.21 -
Added lines 1-33:
(:title Parallel Computing in Optimization:)
(:keywords parallel computing, mathematical modeling, nonlinear, optimization, engineering optimization, interior point, active set, differential, algebraic, modeling language, university course:)
(:description Tutorial on using MATLAB to solve parallel computing optimization applications.:)

!!!! Parallel Computing

APM is configured to run on heterogeneous networks and platforms. In this example application, we solve a series of optimization problems using Linux and Windows servers. The optimization problems are transferred to the servers in parallel, computed in parallel, and returned asynchronously to the MATLAB interface. The tutorial begins with a simple Nonlinear Programming problem that is formulated in two different ways. The tutorial example and other files are available for download below:

[[Attach:parallel_computing_with_apm.zip|Attach:parallel_computing_with_apm.png]]

* [[Attach:parallel_computing_with_apm.zip|Parallel Computing Example Files (.zip)]]




----
