
New pwm_tool module that supports mcu queuing of pwm updates #6369

Merged
merged 2 commits into master from work-pwmtool-20231013 on Nov 17, 2023

Conversation

KevinOConnor
Collaborator

This is an alternative implementation for #4128. The goal is to support fine-grained control of spindles, lasers, and similar tools that are controlled by PWM input. The current output_pin module requires that each pin update have a minimum duration of 100ms, which is often not sufficient for these tools. This PR introduces a new pwm_tool module that queues PWM updates in the micro-controller and is therefore capable of much higher update rates.

This differs from #4128 in the implementation. I'm leery that some of the low-level changes in #4128 may introduce a regression. This PR makes only a few changes outside of the new pwm_tool.py code. That is, the implementation here is mostly isolated to the new module and is therefore unlikely to introduce a regression for users not using this module.

Unfortunately, I do not have a good setup to test this module. Feedback is appreciated.

@Cirromulus - FYI.

-Kevin

Contributor

@Cirromulus left a comment

Overall, I like this MR.
It is concise in its changes to the stepcompress system, and it adds the pwm_tool as a Klipper extra, which is nice (although that adds some redundancy with output_pin.py).

Real-world tests are still outstanding, but I might get to them this weekend or the next.

docs/Config_Reference.md (review comment resolved, outdated)
            m.flush_moves(mq_time)
        else:
            # Move queue will get flushed via regular toolhead flushing
            self.last_kin_move_time = max(self.last_kin_move_time, mq_time)
Contributor

Might use note_kinematic_activity(mq_time)
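
A minimal sketch of that suggestion (assuming note_kinematic_activity() performs the same max() update of last_kin_move_time, as shown in the diff further down, and that the surrounding loop iterates self.all_mcus):

    # Sketch only: delegate the else-branch to the existing helper rather than
    # updating last_kin_move_time directly.
    def note_move_queue_activity(self, mq_time):
        if self.special_queuing_state == "Flushed":
            for m in self.all_mcus:
                m.flush_moves(mq_time)
        else:
            # Move queue will get flushed via regular toolhead flushing
            self.note_kinematic_activity(mq_time)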

ffi_main, ffi_lib = chelper.get_ffi()
self._stepqueue = ffi_main.gc(ffi_lib.stepcompress_alloc(self._oid),
                              ffi_lib.stepcompress_free)
self._mcu.register_stepqueue(self._stepqueue)
Contributor

Minor cosmetic: It is not really a step-queue.

In object-oriented C++, I would have suggested a general buffer-management queue (e.g. command_queue) from which the stepcompress system would inherit, adding the compression part (i.e. the step/dir filtering etc.).

Perhaps I am thinking too generally, but the reason is that this might be a possible location for a hypothetical proportionality feature (i.e. the S parameter on G0 in the RepRap documentation).
After a stepcompress flush, a toolhead might insert pwm_tool messages into the queue according to actual speed / requested speed and a gamma curve.

But we might "cross the bridge once we get there".
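
To illustrate the split being suggested, a conceptual sketch (written in Python for brevity, though the real code lives in the C chelper; all class and method names here are hypothetical):

    import heapq
    import itertools

    class CommandQueue:
        """Generic queue of (clock, message) pairs, flushed in clock order."""
        def __init__(self):
            self._pending = []
            self._counter = itertools.count()  # tie-breaker for equal clocks
        def queue_msg(self, clock, msg):
            heapq.heappush(self._pending, (clock, next(self._counter), msg))
        def flush(self, flush_clock, send):
            # Emit all queued messages scheduled at or before flush_clock
            while self._pending and self._pending[0][0] <= flush_clock:
                clock, _, msg = heapq.heappop(self._pending)
                send(clock, msg)

    class StepCompressQueue(CommandQueue):
        """Adds step/dir filtering and compression on top of the generic queue."""
        def queue_step(self, clock, interval, count, add):
            # ... step compression would happen here before queuing ...
            self.queue_msg(clock, ("queue_step", interval, count, add))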

Collaborator Author

Earlier this year I spent some time reworking the trapq, stepcompress, and steppersync code. It's difficult to get all that code correct, however, so I haven't gotten it to the point where it is ready to merge. The pwm_tool code is largely orthogonal to that work, so it's likely best not to introduce additional dependencies between the two.

SET_PIN PIN=test_pwm_tool VALUE=0.5
SET_PIN PIN=test_pwm_tool VALUE=0.5
SET_PIN PIN=test_pwm_tool VALUE=0.25
SET_PIN PIN=test_pwm_tool VALUE=1
Contributor

Other things worth testing:

  • maximum_mcu_duration, by setting the pin to a non-default value and starting a toolhead move that takes longer than that duration.
  • a small rasterized image at some speed and a reasonable resolution, to test a "real world" example (perhaps the Klipper3D logo).

If wanted, I can provide these tests, as I used something similar (though run manually).

@@ -543,6 +543,14 @@ def register_lookahead_callback(self, callback):
         last_move.timing_callbacks.append(callback)
     def note_kinematic_activity(self, kin_time):
         self.last_kin_move_time = max(self.last_kin_move_time, kin_time)
+    def note_move_queue_activity(self, mq_time):
+        if self.special_queuing_state == "Flushed":
Contributor

During homing and force moves, special_queuing_state is set to "Drip".
While it is not recommended to have a pwm_tool in a non-default state during homing, it might still happen (e.g. via queued G-code).

If the queue is not flushed in this mode, the pwm_tool will still be active during homing, but will fail either because of maximum_mcu_duration or, after homing, due to the delayed schedule ("Timer too close").

At least, that is what happened to me with only _update_move_time. I am not sure whether updating last_kin_move_time in "Drip" mode will lead to updates, because _update_move_time(kin_time) did not.

@KevinOConnor KevinOConnor force-pushed the work-pwmtool-20231013 branch from ed8f2f4 to bf07692 Compare October 14, 2023 16:41
@KevinOConnor
Collaborator Author

Thanks for the detailed review.

I made some changes and rebased this branch.

Upon further thought, I'm not confident that the maximum_mcu_duration feature is correct in this PR. I have now split that code into a separate commit for review purposes, but I suspect it is not a good plan to add support for maximum_mcu_duration as it is currently implemented here.

The Klipper code attempts to plan out stepper movement two seconds in advance of when it is needed. That helps ensure that the micro-controllers have the necessary movement information fully buffered so that they can maintain precise timing. As part of this, the host prioritizes filling the move_queue and the serial bandwidth up to two seconds into the future.

Unfortunately, the maximum_mcu_duration code in this PR may attempt to send the "time extension" pwm update with only 100ms of time before it is needed on the mcu. The problem is, it's possible that the main kinematic planner has already filled the move_queue and queued up messages for the upcoming two seconds, such that an appended pwm update won't arrive within 100ms. This could lead to "Timer too close" and "Missed scheduling of next digital out event" errors.

There is also a related problem that an appended pwm update with a deadline just 100ms in the future does not necessarily wake up the host flushing mechanism in time. (This is likely the reason you saw errors when in "Drip" mode.)

I'm not sure what a good way to address this issue is. Some possibilities:

  1. We could commit the code without maximum_mcu_duration. That could notably limit the utility of this code though.
  2. Some kind of mechanism separate from queued PWM updates could be used to ensure the host and mcu do not lose contact. The code for this could get pretty complicated though.
  3. We could alter the mcu move_queue flushing mechanism (mcu.py:flush_moves()) to check if a PWM time extension update is needed, and append that update just prior to flushing (a rough sketch of this follows below). This would effectively allow the PWM updates to stay in sync with the two seconds of kinematic pre-planning. However, some additional system would still be needed to ensure that the mcu continues to flush if a pwm_tool remains active while the main kinematic code is idle.
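
A rough sketch of that third option (purely illustrative; hooks such as register_time_extender() are hypothetical and not existing Klipper APIs):

    class MCU:
        def __init__(self):
            # Hypothetical registry of pwm_tool "time extension" callbacks
            self._pwm_time_extenders = []
        def register_time_extender(self, callback):
            self._pwm_time_extenders.append(callback)
        def flush_moves(self, print_time):
            # Give each active pwm_tool a chance to append a refresh update
            # before this flush, so those updates ride along with the normal
            # two seconds of kinematic pre-planning.
            for note_flush in self._pwm_time_extenders:
                note_flush(print_time)
            # ... existing flushing of the queued step/command data follows ...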

I'm not sure.

-Kevin

@Cirromulus
Contributor

  1. We could commit the code without maximum_mcu_duration. That could notably limit the utility of this code though.
  2. Some kind of mechanism separate from queued PWM updates could be used to ensure the host and mcu do not lose contact. The code for this could get pretty complicated though.
  3. We could alter the mcu move_queue flushing mechanism (mcu.py:flush_moves()) to check if a PWM time extension update is needed, and append that update just prior to flushing. This would effectively allow the PWM updates to stay in sync with the two seconds of kinematic pre-planning. However, some additional system would still be needed to ensure that the mcu continues to flush if a pwm_tool remains active while the main kinematic code is idle.

Currently, number 2 is most appealing to me.
I'm thinking of a keepalive / timeout system based on the shortest configured maximum_mcu_duration: the toolhead (or another appropriate location) keeps track of when the last message was sent, and inserts a keepalive / null message as the deadline approaches.
The MCU, on the other hand, also tracks the time since the last received message and goes into failsafe when the shortest maximum_mcu_duration is reached. This way, for x PWM pins, the serial link does not need to carry x redundant updates per maximum_mcu_duration, as it currently would.

This, however, holds only if Klippy's fail-mode is fail silent (which I believe it is).
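
A rough host-side sketch of that keepalive idea (illustrative only; send_keepalive() is a hypothetical helper, and the corresponding MCU-side timeout handling would be a separate change):

    KEEPALIVE_MARGIN = 0.5

    class ToolKeepalive:
        def __init__(self, reactor, max_mcu_duration, send_keepalive):
            self.reactor = reactor
            self.max_mcu_duration = max_mcu_duration
            self.send_keepalive = send_keepalive
            self.last_sent = reactor.monotonic()
            self.timer = reactor.register_timer(self._check, reactor.NOW)
        def note_message_sent(self):
            # Called whenever a real update goes out to the mcu
            self.last_sent = self.reactor.monotonic()
        def _check(self, eventtime):
            # Insert a keepalive / null message when the deadline approaches
            if eventtime - self.last_sent > self.max_mcu_duration - KEEPALIVE_MARGIN:
                self.send_keepalive()
                self.last_sent = eventtime
            return eventtime + KEEPALIVE_MARGIN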

There is also a related problem that an appended pwm update with a deadline just 100ms in the future does not necessarily wake up the host flushing mechanism in time. (This is likely the reason you saw errors when in "Drip" mode.)

This is probably the reason for it. I had a hunch, and thought of making a version of your suggestion Nr. 3, but I did not finish this idea because it felt hacky and added a lot of special-case code.

@KevinOConnor KevinOConnor force-pushed the work-pwmtool-20231013 branch from bf07692 to e6a75ed Compare October 14, 2023 23:34
@KevinOConnor
Collaborator Author

FYI, I rebased this branch again.

Currently, number 2 is most appealing to me.

It's possible, but the issue with that approach is that it likely requires mcu code changes. That's a pain to deploy and may risk regressions.

A user could probably "hack" this today by declaring a dummy pin (or led or something innocuous) as an output_pin with its maximum_mcu_duration. Then as long as the user commanded that pin on during print start and off during print end, then they should get general host/mcu disconnect protection.

In general, I'd probably prefer approach 3. It's more work and likely requires some notable refactoring, but seems like less of a hack.

In any case, I suspect only option 1 is practical if we want to merge prior to the next Klipper release. https://klipper.discourse.group/t/upcoming-v0-12-0-release/10948

This, however, holds only if Klippy's fail-mode is fail silent

I'm not sure what you mean by "fail silent".

-Kevin

@Cirromulus
Contributor

Cirromulus commented Oct 15, 2023

It's possible, but the issue with that approach is that it likely requires mcu code changes. That's a pain to deploy and may risk regressions.

This is correct. With this large a user base, I see the pain of deployment. I still find a timeout / keepalive system more elegant than having a separate "non-move" queue with a higher priority, but this discussion is probably better suited to the Discourse development channel.
Until then, I agree with option 1 for this MR. My fork also did not support it correctly, so we would not lose anything.

A user could probably "hack" this today by declaring a dummy pin (or led or something innocuous) as an output_pin with its maximum_mcu_duration. Then as long as the user commanded that pin on during print start and off during print end, then they should get general host/mcu disconnect protection.

As my howto suggests a separate enable pin for the power supply of the laser / tool, this would still fit well. The non-high-throughput pin can carry the safety timeout, so that the PWM pin does not need it. I suggest adding this to sample-pwm-tool.cfg and docs/Using_PWM_Tools.md.

I'm not sure what you mean by "fail silent".

Fail silent means that if something fails in the Klipper host, it does not continue to send (possibly bogus) messages to the MCU; it does not send anything at all.
A classic example is a live-lock, where the actual code is stuck but some other thread still sends (old) data.

@KevinOConnor KevinOConnor force-pushed the work-pwmtool-20231013 branch from e6a75ed to 41dd3e6 Compare October 20, 2023 20:18
@KevinOConnor
Collaborator Author

Okay, thanks. I've rebased this branch to include just the basic [pwm_tool] module without the maximum_mcu_duration support. (The previous maximum_mcu_duration code, known to not work fully correctly, is up at https://github.com/KevinOConnor/klipper-dev/tree/work-pwmtool-20231020 .)

It would help if others with actual PWM controlled tools could confirm that the PR here works on that hardware and provides some utility.

-Kevin

@Cirromulus
Contributor

I will test it with my laser setup tomorrow.

I still suggest adapting the example configuration to include the (regular) power supply pin:

diff --git a/config/sample-pwm-tool.cfg b/config/sample-pwm-tool.cfg
index 48986061..e4cbd45e 100644
--- a/config/sample-pwm-tool.cfg
+++ b/config/sample-pwm-tool.cfg
@@ -3,46 +3,46 @@
 # See docs/Using_PWM_Tools.md for a more detailed description.
 
 [pwm_tool TOOL]
-pin: !ar9       # use your fan's pin number
+pin: !ar9       # use your pin number connected to PWM input
 hardware_pwm: True
 cycle_time: 0.001
 shutdown_value: 0
-maximum_mcu_duration: 5
-# Default: 0 (disabled)
-# Amount of time in which the host has to acknowledge
-# a non-shutdown output value.
-# Suggested value is around 5 seconds.
-# Use a value that does not burn up your stock.
-# Please note that during homing, your tool
-# needs to be in default speed.
+
+[output_pin TOOL_ENABLE]
+pin: !ar10      # use the pin number connected to power supply of Tool
+shutdown_value: 0
+maximum_mcu_duration: 2
 
 [gcode_macro M3]
 gcode:
+    SET_PIN PIN=TOOL_ENABLE VALUE=1
     {% set S = params.S|default(0.0)|float %}
     SET_PIN PIN=TOOL VALUE={S / 255.0}
 
 [gcode_macro M4]
 gcode:
+    SET_PIN PIN=TOOL_ENABLE VALUE=1
     {% set S = params.S|default(0.0)|float %}
     SET_PIN PIN=TOOL VALUE={S / 255.0}
 
 [gcode_macro M5]
 gcode:
     SET_PIN PIN=TOOL VALUE=0
+    SET_PIN PIN=TOOL_ENABLE VALUE=0
 
 
 # Optional: LCD Menu Control
 
 [menu __main __control __toolonoff]
 type: input
-enable: {'pwm_tool TOOL' in printer}
-name: Fan: {'ON ' if menu.input else 'OFF'}
+enable: {'output_pin TOOL_ENABLE' in printer}
+name: Tool: {'ON ' if menu.input else 'OFF'}
input: {printer['output_pin TOOL_ENABLE'].value}
 input_min: 0
 input_max: 1
 input_step: 1
 gcode:
-    M3 S{255 if menu.input else 0}
+    SET_PIN PIN=TOOL_ENABLE VALUE={1 if menu.input else 0}
 
 [menu __main __control __toolspeed]
 type: input
diff --git a/docs/Using_PWM_Tools.md b/docs/Using_PWM_Tools.md
index 108ae37a..98e9f91c 100644
--- a/docs/Using_PWM_Tools.md
+++ b/docs/Using_PWM_Tools.md
@@ -21,12 +21,14 @@ _fully on_ for the time it takes the MCU to start up again.
 For good measure, it is recommended to _always_ wear appropriate
 laser-goggles of the right wavelength if the laser is powered;
 and to disconnect the laser when it is not needed.
-Also, you should configure a safety timeout,
+An additional safety layer would be powering your tool through the hotend-heater pin.
+In the configuration, the additional `[output_pin]` would be activated by `M3/M4` and disabled by `M5`.
+In this pin you should configure a safety timeout,
 so that when your host or MCU encounters an error, the tool will stop.
 
 For an example configuration, see [config/sample-pwm-tool.cfg](/config/sample-pwm-tool.cfg).
 
-## Commands
+## Example Commands
 
 `M3/M4 S<value>` : Set PWM duty-cycle. Values between 0 and 255.
 `M5` : Stop PWM output to shutdown value.
@@ -36,26 +38,29 @@ For an example configuration, see [config/sample-pwm-tool.cfg](/config/sample-pw
 If you use Laserweb, a working configuration would be:
 
     GCODE START:
-        M5            ; Disable Laser
         G21           ; Set units to mm
         G90           ; Absolute positioning
         G0 Z0 F7000   ; Set Non-Cutting speed
 
     GCODE END:
         M5            ; Disable Laser
-        G91           ; relative
-        G0 Z+20 F4000 ;
-        G90           ; absolute
 
     GCODE HOMING:
         M5            ; Disable Laser
         G28           ; Home all axis
 
     TOOL ON:
-        M3 $INTENSITY
+        M3
 
     TOOL OFF:
-        M5            ; Disable Laser
+        M3 S0         ; No output, but powered
 
     LASER INTENSITY:
-        S
+        M3 S
+
+    PWM MIN S VALUE:
+        0
+
+    PWM MAX S VALUE:
+        255


github-actions bot commented Nov 5, 2023

Thank you for your contribution to Klipper. Unfortunately, a reviewer has not assigned themselves to this GitHub Pull Request. All Pull Requests are reviewed before merging, and a reviewer will need to volunteer. Further information is available at: https://www.klipper3d.org/CONTRIBUTING.html

There are some steps that you can take now:

  1. Perform a self-review of your Pull Request by following the steps at: https://www.klipper3d.org/CONTRIBUTING.html#what-to-expect-in-a-review
    If you have completed a self-review, be sure to state the results of that self-review explicitly in the Pull Request comments. A reviewer is more likely to participate if the bulk of a review has already been completed.
  2. Consider opening a topic on the Klipper Discourse server to discuss this work. The Discourse server is a good place to discuss development ideas and to engage users interested in testing. Reviewers are more likely to prioritize Pull Requests with an active community of users.
  3. Consider helping out reviewers by reviewing other Klipper Pull Requests. Taking the time to perform a careful and detailed review of others work is appreciated. Regular contributors are more likely to prioritize the contributions of other regular contributors.

Unfortunately, if a reviewer does not assign themselves to this GitHub Pull Request then it will be automatically closed. If this happens, then it is a good idea to move further discussion to the Klipper Discourse server. Reviewers can reach out on that forum to let you know if they are interested and when they are available.

Best regards,
~ Your friendly GitIssueBot

PS: I'm just an automated script, not a human being.

@tregnier

tregnier commented Nov 5, 2023

Hi @Cirromulus and @KevinOConnor,

First of all thanks for your work on that upgrade!

I've been trying to use it with my setup, but I'm having an issue: Internal error on command:"G1" when trying to run G-code from laserburn or Snapmaker.
I feel it's getting too many commands queued, but I really don't have the knowledge to tell.

Below are two klippy logs taken right after the crash happens, in case they help with reviewing the feature.

klippy(5).log
klippy(4).log

I have a laser and can try to run some tests if you want, to review the feature (not only related to my issue) so it can eventually be integrated into the main branch.

I can't get the laser working right now, but the printing setup works fine. I've created two config files, printer.cfg and laser.cfg, so I can deactivate the laser config when printing normally.

@Cirromulus
Contributor

The relevant portion of your log is the following:

Traceback (most recent call last):
  File "/home/biqu/klipper/klippy/extras/virtual_sdcard.py", line 264, in work_handler
    self.gcode.run_script(line)
  File "/home/biqu/klipper/klippy/gcode.py", line 216, in run_script
    self._process_commands(script.split('\n'), need_ack=False)
  File "/home/biqu/klipper/klippy/gcode.py", line 198, in _process_commands
    handler(gcmd)
  File "/home/biqu/klipper/klippy/extras/gcode_move.py", line 143, in cmd_G1
    self.move_with_transform(self.last_position, self.speed)
  File "/home/biqu/klipper/klippy/extras/bed_mesh.py", line 209, in move
    self.toolhead.move([x, y, z + self.fade_target, e], speed)
  File "/home/biqu/klipper/klippy/toolhead.py", line 427, in move
    self.move_queue.add_move(move)
  File "/home/biqu/klipper/klippy/toolhead.py", line 187, in add_move
    self.flush(lazy=True)
  File "/home/biqu/klipper/klippy/toolhead.py", line 176, in flush
    self.toolhead._process_moves(queue[:flush_count])
  File "/home/biqu/klipper/klippy/toolhead.py", line 331, in _process_moves
    cb(next_move_time)
  File "/home/biqu/klipper/klippy/extras/pwm_tool.py", line 154, in <lambda>
    lambda print_time: self._set_pin(print_time, value))
  File "/home/biqu/klipper/klippy/extras/pwm_tool.py", line 143, in _set_pin
    self.mcu_pin.set_pwm(print_time, value)
  File "/home/biqu/klipper/klippy/extras/pwm_tool.py", line 106, in set_pwm
    ret = self._stepcompress_queue_mq_msg(self._stepqueue, clock,
OverflowError: integer 29622610503 does not fit 'uint32_t'

This does not seem to be the clock value, but note that the "set value" v should be clamped to 0-1.
Does this also happen when using software PWM?
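
For reference, the kind of clamping/scaling being referred to looks roughly like this (illustrative only, with a hypothetical pwm_max; this is not necessarily where the actual bug was):

    def scale_pwm_value(value, pwm_max=255.):
        # Clamp the requested duty cycle to [0, 1], then scale to the mcu range
        value = max(0., min(1., value))
        return int(value * pwm_max + 0.5)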

@tregnier

tregnier commented Nov 6, 2023

Just tried with software PWM (set hardware_pwm = False), and I got the same issue.

I'm using a BIQU CB1 instead of an RPi, by the way; not sure whether that could have an effect?

@KevinOConnor KevinOConnor force-pushed the work-pwmtool-20231013 branch from 41dd3e6 to b2a9308 Compare November 8, 2023 19:07
@KevinOConnor
Collaborator Author

Thanks. That looks like a bug in the new pwm_tool.py code. I have updated the code and rebased this branch. Hopefully this issue is now fixed.

-Kevin

@Cirromulus
Contributor

Cirromulus commented Nov 9, 2023

I have successfully tested the newest version (b2a9308) to work with my GT2560 mainboard.
Under higher workloads, my maximum_mcu_duration target of 3 seconds can't always be met on the (laser) power pin, but I do have a very weak MCU and CPU (Orange Pi Zero). I saw slightly different behavior in the fork, where it usually tended to fail with "Timer too close" first.
But it does work well in all (my) scenarios that already worked in the fork.

@tregnier

Looks like it works for me as well with the new code.
I have issues getting proper engraving, but this is due to my laser; I need to order a new one.
No more errors, at least.

Thanks!

The output_pin module is only capable of updating an output pin at
most once every 100ms.  Add a new pwm_tool module that is capable of
queuing updates in the micro-controller and thus allowing for much
higher update rates.

Signed-off-by: Kevin O'Connor <[email protected]>
@KevinOConnor KevinOConnor merged commit 29b7550 into master Nov 17, 2023
2 checks passed
@KevinOConnor KevinOConnor deleted the work-pwmtool-20231013 branch November 17, 2023 03:08
@KevinOConnor
Collaborator Author

Thanks. I committed this branch. @Cirromulus - if you'd like a different config example, feel free to send it along.

I'd like to try to get to better "trapq flushing" in the next month or so, which should hopefully allow us to reintroduce the "maximum duration" check.

-Kevin

@A-Panic

A-Panic commented Nov 19, 2023

Hi everyone, I'm trying to set up a Melzi 1.1.4 board (8-bit atmega1284p) to take advantage of this new feature.
At the moment the laser is not yet plugged into the mainboard; I'm using a PWM fan as a test device.
Upon manually sending M3 S255 nothing seems to happen; then, after sending another M3/M4/M5 command, I get an MCU 'mcu' shutdown: Timer too close. On the host side I'm using a BTT Pi v1.2 running only Klipper. The error suggests that the problem could be on the host side, but as far as I can tell everything seems OK. I used this same host, cable and PSU with my printer for a while with no problems.
I'm attaching a klippy log taken after the error.

klippy(4).log

I don't really have the knowledge to understand where the problem is, so if any other information or tests are needed, I will try to help as soon as possible.

Thank you for this awesome firmware and for this feature

@willpuckett
Contributor

This is happening a good bit with my laser as well when sending commands manually, but it works great in the context of a fully printed G-code file. I'm not sure if console commands are queued differently or if this is an expected part of the max duration issue mentioned in the previous post…

@A-Panic

A-Panic commented Nov 20, 2023

@willpuckett thank you for your feedback; later today I'll try to run a job instead of manual commands and see what happens.

@A-Panic

A-Panic commented Nov 20, 2023

Hi everyone,

As @willpuckett suggested (thank you!), running a G-code file does not give any errors for me either.
Anyway, I switched from the fan to the actual laser before running these tests, and there is still no actual change in the output: while the file is "printing" the toolhead moves, but the laser stays off.
At this point the problem may be in the physical connection or in a bad/missing configuration in [pwm_tool TOOL]

  • The laser is connected to the part cooling fan pin (PB4), which should be capable of hardware PWM (setting hardware_pwm to true or false does not change the result);
  • I'm using Fluidd, and [virtual_sdcard] is enabled by default (and, as far as I understand, mandatory). Does this interfere with the command queuing process? Again, I'm just giving more info and context, as I'm not sure how this feature actually works;
  • This is the connector on my laser module:
    [image of the laser module connector]
    Do I need to set cycle_time or scale to a specific value to match that 1kHz on the PWM pin?

Sorry for all the questions and thank you in advance

@KevinOConnor
Collaborator Author

Upon manually sending M3 S255 nothing seems to happen, then after sending another M3/M4/M5 command I get an MCU 'mcu' shutdown: Timer too close.

This is a defect in the new pwm_tool code. Issuing a command when the printer is otherwise idle does not flush out updates. (In contrast, if the printer is in an "active" state, it should flush the updates.) This is similar to the problem identified earlier with maximum_mcu_duration. Unfortunately, there does not seem to be a simple fix to this issue.

I'll have to think about a solution to this issue.

-Kevin

@firatsezer

firatsezer commented Nov 21, 2023

My MKS Robin Nano Mini v3 is not working with master. I previously used the repo from Cirromulus, and the same config has not worked here; I adapted it a little bit to this, but nothing works and the laser doesn't appear.

Currently in pwm.cfg (commented parts not included):

[pwm_tool LASER]
pin: PB0
hardware_pwm: True
shutdown_value: 0

With Cirromulus's repo, I declared it as an output_pin and everything worked as expected.

I don't need to enable the tool; it gets power from the main line of the print head. I have a TevoUp Hydra 2-in-1.

klippy (1).log

@Cirromulus
Contributor

This is a defect in the new pwm_tool code. Issuing a command when the printer is otherwise idle does not flush out updates. (In contrast, if the printer is in an "active" state, it should flush the updates.) This is similar to the problem identified earlier with maximum_mcu_duration. Unfortunately, there does not seem to be a simple fix to this issue.

I'll have to think about a solution to this issue.

In my branch, I check for this case (idle) based on the kin_time and flush it with self.note_kinematic_activity(time):
https://github.com/Klipper3d/klipper/pull/4128/files#diff-48322802b4775fb953d78d2928004fc887f50efead57b6d599e3045a05a90b45R557-R569

@A-Panic

A-Panic commented Nov 21, 2023

Hi everyone,

Today I managed to make my laser module work. It turns out that the PWM input my module needs is 5V instead of 12V, so after setting a different pin the laser started to work.

Thanks to you all; if you need to run some tests, count me in.

@KevinOConnor
Collaborator Author

KevinOConnor commented Nov 22, 2023

In my branch, I check for this case (idle) based on the kin_time and flush it with self.note_kinematic_activity(time):

Alas, calling self.note_kinematic_activity() is also insufficient to flush updates. In addition to setting self.last_kin_move_time, the toolhead would also need to make sure the flush timer is active (self.flush_timer).

-Kevin

EDIT: To be clear, your branch also calls self._update_move_time(kin_time) which will flush the messages, but I'm unsure of the impact a proactive flush like that may have on other code.
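
To make that point concrete, a hedged sketch of what "also making sure the flush timer is active" could look like in the toolhead (hypothetical method name; not the actual fix that was later merged):

    def note_tool_activity(self, tool_time):
        # Record the activity time (as note_kinematic_activity() does) ...
        self.last_kin_move_time = max(self.last_kin_move_time, tool_time)
        # ... and also (re)arm the background flush timer so the queued pwm
        # update actually gets flushed while the printer is otherwise idle.
        self.reactor.update_timer(self.flush_timer, self.reactor.NOW)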

@KevinOConnor
Collaborator Author

I believe commit #6410 should fix the "Timer too close" errors reported here.

-Kevin

@KevinOConnor
Collaborator Author

FYI, I have created a new development branch at https://github.com/KevinOConnor/klipper-dev/tree/work-pwmflush-20231130 with initial support for maximum_mcu_duration with pwm_tool.

That development branch is based on the code from #6410. It's tricky covering all the "corner cases" during flushing. However, I'm hopeful the above development branch is correct.

If all goes well with #6410 then I'll create a PR for the above development branch.

-Kevin
