
Durations other than 1.0 seem to break learning #2

Open
BenBlumer opened this issue Mar 29, 2014 · 2 comments
@BenBlumer commented Mar 29, 2014
The learning seems to get butchered for any duration other than 1.

For example:

if __name__ == '__main__':

  import numpy as np
  import pylab as plt
  from dmp import DiscreteDMP  # module name assumed; adjust to your local import
  from plot_tools import plot_pos_vel_acc_trajectory

  # only Transformation system (f=0)
  dmp = DiscreteDMP()
  end_time = 1 # Definitely don't change this.
  frequency = 1000 # Changing this also seems to break things.

  trajectory_time_points = np.linspace(0, end_time, end_time * frequency)
  print "Time begins with %g and ends with %g" % (trajectory_time_points[0], trajectory_time_points[-1])
  trajectory_y_values = [np.sin(10 * t) for t in trajectory_time_points]
  dmp.setup(trajectory_y_values[0], trajectory_y_values[-1], end_time)
  dmp.learn_batch(trajectory_y_values, frequency)

  traj = []
  for x in range(end_time * frequency):
    #if x == 500:
    #  dmp.goal = 4.0
    dmp.run_step()
    traj.append([dmp.x, dmp.xd, dmp.xdd])

  fig = plt.figure('f=0 (transformation system only)', figsize=(10, 3))
  ax1 = fig.add_subplot(131)
  ax2 = fig.add_subplot(132)
  ax3 = fig.add_subplot(133)
  plot_pos_vel_acc_trajectory((ax1, ax2, ax3), traj, dmp.delta_t, label='DMP $f=0$', linewidth=1)

  fig.tight_layout()

  plt.show()

This produces a nice sine trajectory, but changing "end_time = 1" to "end_time = 2" produces this:
(screenshot butcheredtraj: the learned trajectory is badly distorted)

@BenBlumer BenBlumer changed the title Question: durations other than 1. Durations other than 1.0 seem to break learning Mar 29, 2014
@BenBlumer (Author) commented

I can get a better trajectory by changing line 180 to "dt = duration / n_sampled" and by manually setting tau before learning ("dmp.tau = end_time"). But the trajectory still doesn't reach the amplitude that I suspect it should.
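For what it's worth, the dt part of that change can be sketched outside of DiscreteDMP (the names below are illustrative, not the library's):

```python
import numpy as np

# A 2 s demonstration sampled at 1 kHz, as in the example above.
duration = 2.0
frequency = 1000
t = np.linspace(0, duration, int(duration * frequency))
y = np.sin(10 * t)

n_sampled = len(y)
dt = duration / n_sampled   # proposed fix: the time step depends on duration
yd = np.gradient(y, dt)     # velocities computed with the corrected step
```

If dt were instead computed as 1.0 / n_sampled, every finite-difference velocity and acceleration would be off by a factor of duration, which would distort learning for any duration other than 1.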

(screenshot better_traj: improved, but the amplitude is still too low)

Any thoughts?

@carlos22 (Owner) commented Apr 2, 2014

I think I discovered the same thing once. The solution was to transform the trajectory to fit between 0 and 1 before learning, and transform it back again afterwards. You may find something on this in my master's thesis [1], but definitely in some of the cited literature. Good luck!

[1] http://amser.hs-weingarten.de/cms/administrator/components/com_intranet/uploads/paper/134/Masterthesis.pdf
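A minimal sketch of that normalization, with illustrative names (this is not the library's API): squash the demonstration's time axis into [0, 1] before learning, then rescale the results, since d/dt = (1/duration) * d/ds.

```python
import numpy as np

# A 2 s demonstration, as in the example above.
duration = 2.0
t = np.linspace(0, duration, 2000)
y = np.sin(10 * t)

# Rescale the time axis so the learner always sees a unit duration.
s = t / duration                  # normalized phase in [0, 1]
dy_ds = np.gradient(y, s)         # velocity w.r.t. normalized time

# After learning, map back to real time: d/dt = (1/duration) * d/ds.
dy_dt = dy_ds / duration
```

Velocities scale by 1/duration and accelerations by 1/duration**2 on the way back, which is exactly the role tau plays in the DMP formulation.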
