My aim is for this to be a comprehensive explanation of how the app works, i.e. how we make code-as-source-of-truth a reality.
Here are the most important parts in a nutshell. The way to read it: with the exception of the blue rectangles, the rectangles represent data, whereas the ovals manipulate that data. (The engine isn't included here, as the engineConnectionManager is there to abstract it away.)
AST🤖/Code🧑💻 pair
The sourceCode and the AST together are the source-of-truth for the app, and so changing from one type to the other has to be ironclad. That is, taking any given AST, casting it to code, then reparsing to an AST should give you the same AST, and likewise for code -> AST -> code. (The one exception here is that we only expect the AST to have accurate source ranges when it's been derived from code.)
The one-sentence explanation of what an AST even is would be "it's a nested data structure that represents the code", but really the best way to get a feel for it if you're not familiar with the concept is to:
play around with our AST explorer (sidequest)
Screenshare.-.2023-09-21.3_59_33.PM.mp4
Besides being a standard part of practically all modern programming languages, it's especially useful to us because changing code programmatically wouldn't be scalable with simple string manipulation. A good way to think about how these two work in tandem is that when a human wants to edit the code, they do so directly by typing in our text editor; when we want to edit the code using our app logic, we do so by updating the AST, and recast it to code for the human. That is, the code is the source-of-truth for the human, and the AST is the source-of-truth for robots.
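To make that concrete, here's a loose sketch (with illustrative node and property names, not our real schema) of what the AST/code pair for `const myVar = 5` could look like, along with a toy recaster:

```typescript
// A loose sketch of an AST for `const myVar = 5`.
// Node and property names here are illustrative, not the app's exact schema.
interface Literal {
  type: 'Literal'
  value: number
  start: number
  end: number
}
interface VariableDeclaration {
  type: 'VariableDeclaration'
  name: string
  init: Literal
  start: number
  end: number
}

const myVarDecl: VariableDeclaration = {
  type: 'VariableDeclaration',
  name: 'myVar',
  start: 0,
  end: 15,
  init: { type: 'Literal', value: 5, start: 14, end: 15 },
}

// A toy recaster: walking the node back to code.
function recast(node: VariableDeclaration): string {
  return `const ${node.name} = ${node.init.value}`
}

console.log(recast(myVarDecl)) // "const myVar = 5"
```

The roundtrip invariant described above is that parsing this recast string would hand back an equivalent tree.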
The executor and the stdlib
The executor is of course what runs the code. It does so by traversing through the AST, creating programMemory as it goes (the technical name is a tree-walk interpreter).
So in a simple case of
```
const myVar = 5
const myOtherVar = myVar + 3
```
It creates programMemory like
```
{
  myVar: 5,
  myOtherVar: 8
}
```
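A minimal tree-walk interpreter in that spirit can be sketched like so (the node shapes are illustrative stand-ins, and the real executor handles far more than numeric `+`):

```typescript
// A minimal tree-walk interpreter sketch for the example above.
type Expr =
  | { type: 'Literal'; value: number }
  | { type: 'Identifier'; name: string }
  | { type: 'BinaryExpression'; operator: '+'; left: Expr; right: Expr }

type Statement = { type: 'VariableDeclaration'; name: string; init: Expr }

type ProgramMemory = Record<string, number>

function evalExpr(expr: Expr, mem: ProgramMemory): number {
  switch (expr.type) {
    case 'Literal':
      return expr.value
    case 'Identifier':
      return mem[expr.name] // look up previously declared variables
    case 'BinaryExpression':
      return evalExpr(expr.left, mem) + evalExpr(expr.right, mem)
  }
}

// Walk the body in order, filling programMemory as we go.
function execute(body: Statement[]): ProgramMemory {
  const mem: ProgramMemory = {}
  for (const stmt of body) {
    mem[stmt.name] = evalExpr(stmt.init, mem)
  }
  return mem
}

const mem = execute([
  { type: 'VariableDeclaration', name: 'myVar', init: { type: 'Literal', value: 5 } },
  {
    type: 'VariableDeclaration',
    name: 'myOtherVar',
    init: {
      type: 'BinaryExpression',
      operator: '+',
      left: { type: 'Identifier', name: 'myVar' },
      right: { type: 'Literal', value: 3 },
    },
  },
])

console.log(mem) // { myVar: 5, myOtherVar: 8 }
```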
The EngineConnectionManager (mapping between engine-calls and sourceRanges)
But the way this actually produces 3d geometry is through our stdlib and a websocket connection to the KittyCAD engine.
Each one of our stdlib functions that needs to send a command to the engine does so via the engineConnectionManager; it passes along an id, and should receive a result from the engine. For the most part the result from the engine is not heavily used. For example, in the case of a line(...) function call, we only want to know that the creation of that line was successful, though later we'll have stdlib functions that query existing geometry (e.g. getExtrudeVolume) where the result will be important.
But the engineConnectionManager does much more than this
It maintains a mapping between information important to the UI and information important to the engine. That mostly means associating the cmd_id with metadata related to execution and the AST. In particular, the mapping between cmd_id and sourceRange is very important for a number of reasons, the first of which is that when putting your cursor on a line of code that was responsible for creating something in the scene, we want to highlight that thing in the scene. In the video, while faint, you can see the line segment change as the cursor moves; this includes multiple cursors highlighting multiple segments.
Screenshare.-.2023-09-21.7_52_51.PM.mp4
Introducing the UI to our mental model in blue
This segment highlighting on cursor position is made possible by the mapping that the engineConnectionManager maintains (the arrow going to the streamedUI is dotted, as the engineConnectionManager communicates which ids to highlight and it will appear that way in the stream).
This also works the other way, where hovering over a segment will highlight the code responsible, and clicking will select the line by putting the cursor in that part of the code (yes, selections in the app are mostly just cursor positions). This works with multiple selections with multiple cursors by holding the shift key (note this example with segments currently only works in edit mode).
Screenshare.-.2023-09-21.8_37_49.PM.mp4
This works as the app sends back the user's mouse pixel-coordinates and other mouse events like clicks, and in turn, the engine responds with the ids of any scene entities that have either been hovered or clicked on, and the engineConnectionManager is able to resolve these back to source ranges.
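That bookkeeping can be sketched as a two-way lookup (the names ArtifactMap, commandType, and the ids here are made up for illustration):

```typescript
// A sketch of the id <-> sourceRange bookkeeping described above.
// Names (ArtifactMap, commandType, the uuids) are illustrative.
type SourceRange = [number, number]

interface ArtifactMap {
  [cmdId: string]: { range: SourceRange; commandType: string }
}

const artifactMap: ArtifactMap = {
  'uuid-1': { range: [24, 44], commandType: 'extend_path' },
  'uuid-2': { range: [50, 68], commandType: 'extend_path' },
}

// Engine reports a hovered/clicked entity id; resolve it to code to highlight.
function idToRange(map: ArtifactMap, id: string): SourceRange | undefined {
  return map[id]?.range
}

// Cursor sits at an offset in the code; resolve it to scene entity ids.
function cursorToIds(map: ArtifactMap, cursor: number): string[] {
  return Object.entries(map)
    .filter(([, { range }]) => cursor >= range[0] && cursor <= range[1])
    .map(([id]) => id)
}

console.log(idToRange(artifactMap, 'uuid-1')) // [24, 44]
console.log(cursorToIds(artifactMap, 55)) // ['uuid-2']
```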
What we've gone over so far is basically how we go from code to execution, and how after execution we're able to associate artifacts in the 3d-scene with specific sections of code. What we haven't talked about is how the code is generated from UI-interactions. The above has been important foundation: selecting things in the scene and having that resolve to sourceRanges is needed because sourceRanges are what we use as hooks to surgically query and modify the AST.
These two steps:
Query: check the AST to see if the mod is possible (this includes cursor-position/sourceRange and other app state, like being in sketchmode for example).
Modify: pull the trigger on the ast-mod if the query step is okay.
It's best to go through examples for this. A simple one would be when a user wants to add a horizontal constraint to a segment. In most cases this means taking an expression like |> line([2, 3], %) and changing it to |> xLine(2, %). The process is:
The user selects the segment they're interested in, this puts the cursor on the line of code
We fire a query to make sure the mod is possible (we'll get into the details later for how this works with constraints). An example of code that can't be transformed might be |> yLine(3, %), as it's already constrained, or code that's entirely unrelated, like const myVar = 5
If the query says it can be transformed, we enable the ast-mod action and the user can fire it (from the cmd-bar, button or hotkey)
The code is updated and re-executes
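A toy version of the query step for this horizontal-constraint example might look like the following. The stdlib function names are real; the Call shape and helper name are hypothetical, and the real query (covered later) is driven by a transform map rather than hardcoded names:

```typescript
// A toy query: can a horizontal constraint apply to the call the cursor is on?
// The Call shape and this hardcoded logic are illustrative only.
type Call = { fn: string; args: (number | string)[] }

function canConstrainHorizontal(call: Call | undefined): boolean {
  if (!call) return false // cursor wasn't on a segment call at all, e.g. `const myVar = 5`
  // Calls that are already axis-aligned can't take this constraint.
  if (call.fn === 'xLine' || call.fn === 'yLine') return false
  return call.fn === 'line' || call.fn === 'lineTo' || call.fn === 'angledLine'
}

console.log(canConstrainHorizontal({ fn: 'line', args: [2, 3] })) // true
console.log(canConstrainHorizontal({ fn: 'yLine', args: [3] })) // false
console.log(canConstrainHorizontal(undefined)) // false
```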
Screenshare.-.2023-09-22.11_56_56.AM.mp4
With that appetizer example out of the way, let's walk through a fuller workflow.
Example workflow
Note these examples don't follow 1:1 with the workflow currently available in the app.
A User opens the app with an empty scene, they can rotate, pan, zoom.
They go to start a sketch. Once they do that we show them the default planes. There is no code generated at this point; the FE will make engine calls to create these planes and hang onto their ids in app memory. A user selects one of the planes, at which point we can generate some code. For readability startSketchOnPlane should take xy, yz, xz, but we'll accommodate arbitrary planes too.
Adding the profile origin
The user is now in sketchMode for that plane, and can select a tool; they select the line segment tool. The user now clicks on the sketch plane with the tool. The FE sends an event to the engine about where in the stream the user clicked (xy-pixels) and in turn, the engine responds with what happened (xy of the click on the sketch plane).
The ast for an empty editor looks like this
```
{
  start: 0,
  end: 0,
  body: [],
}
```
that is an empty body, no expressions yet, so the ast-mod for starting a sketch is going to be to add an expression to the end of the body. The code we want to generate is const part01 = startSketchOnPlane([x, y]) |> startProfileAt([x2, y2], %)
that is, a variable declaration const part01 that is initialised with a callExpression startSketchAt(...) that has one argument: an array expression with the xy coords [1, 1]. I'll pop in the ast for this expression that needs to be added to the program's body property, but because it's verbose I won't keep doing it throughout this issue
but really it's [createLiteral(x), createLiteral(y)] with xy from the engine event.
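In that spirit, here's a hypothetical sketch of such node-builder helpers; the real helpers differ in detail, and the Literal/ArrayExpression/CallExpression shapes here are simplified:

```typescript
// Hypothetical node-builder helpers in the spirit of createLiteral above.
interface Literal {
  type: 'Literal'
  value: number
  start: number
  end: number
}
interface ArrayExpression {
  type: 'ArrayExpression'
  elements: Literal[]
  start: number
  end: number
}
interface CallExpression {
  type: 'CallExpression'
  callee: string
  arguments: ArrayExpression[]
  start: number
  end: number
}

// Fresh nodes get dummy ranges; accurate ones come from the
// recast -> reparse loop described below.
function createLiteral(value: number): Literal {
  return { type: 'Literal', value, start: 0, end: 0 }
}

function createArrayExpression(elements: Literal[]): ArrayExpression {
  return { type: 'ArrayExpression', elements, start: 0, end: 0 }
}

function createCallExpression(callee: string, args: ArrayExpression[]): CallExpression {
  return { type: 'CallExpression', callee, arguments: args, start: 0, end: 0 }
}

// Build the call for an engine event reporting a click at (1, 1).
const call = createCallExpression('startProfileAt', [
  createArrayExpression([createLiteral(1), createLiteral(1)]),
])
console.log(call.callee) // startProfileAt
```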
Once we've added this expression to the ast, we then recast the ast back to code so that it shows up in the editor in human-readable form. The generated code is then converted back to the ast and we execute from the ast.
The reason why it goes modifiedAst -> code -> ast is because each node has start/end source range properties, and exactly how many characters will get added is only known by the recaster; we need to do this loop to get accurate start-end values.
Another thing to note is that when we started a new sketch, there was no source-range to use as a hook to know what to modify in the AST, instead we were adding a new expression to the end of the body. Now that we have some code, we'll keep the source range of this variable declaration in our app-state as the thing we're currently editing.
To recap, what's happened here? The user has done a couple of things related to the 3d scene. By having the FE send back these click events etc, the engine can respond with events that are related to the 3d-scene, which the FE doesn't have context on. These events are then used to produce code. In this case, because the engine and the app are working in lockstep, we shouldn't need to re-execute to produce the 3d scene artifacts. But for the sake of explanation let's look at execution each time.
With execution we produce artifacts that exist in the 3d-scene by executing the generated code. In this case the CallExpression startProfileAt, being part of the stdlib, makes an engine-call to create this start point so it can be displayed on the screen (let's go with this 3-line crosshair thing so as not to confuse it with the plane origin).
Now that code has caused something to appear in the 3d scene, we should recall that the way that the engine and the app keep their own representations of the entities created by code execution is with ids, where the engineConnectionManager keeps a mapping of those ids to other code metadata.
Let's look at the result of the execution of our code so far, that is, the program memory. What startProfileAt creates is a sketchGroup; ostensibly this is an array of values for each segment, but it also has a start property for indicating where the sketch starts. For our sample code the program memory looks like
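something roughly like the following; the field names here are assumed for illustration rather than the app's exact schema, and the id is a placeholder:

```typescript
// Roughly what programMemory might hold after startProfileAt runs.
// Field names and the id are assumed, not the app's exact schema.
const programMemory = {
  root: {
    part001: {
      type: 'sketchGroup',
      // one entry per segment; no segments have been added yet
      value: [] as object[],
      start: {
        id: 'uuid-from-engine', // the id sent to the engine for the profile origin
        to: [1, 1],
        sourceRange: [26, 53],
      },
    },
  },
}
```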
In the above the value is an empty array, but we do have the start property populated. This start value has an id; this is the id that was sent to the engine when creating the sketch profile origin. And again, having this id is how we power hovers and clicks in the 3d scene resolving to source ranges in the editor.
Animating draft segments and adding segments
Back to where we left off in terms of UX: the user had selected the sketch start point, but they've still got the line segment tool equipped, so the engine will continue to animate draft segments (dashed line).
Then when the user clicks, it will follow the same pattern: the FE will inform the engine of the click, the engine will reply with the 2d coords of the click, and the frontend will use that to modify the code.
In this case we want to transform the code by appending a new segment call to the existing pipeExpression.
As stated earlier, we have the source range of this variable declaration in our app state so we know what we want to modify; we also have the program memory from the last execution. Since line is a relative line, we want to add to the pipe expression line([x, y], %) where xy is where the user just clicked minus the position of the end of the sketch, which would be the last element in the `value` property in the `sketchGroup`, or in this case, where it's an empty array, we look at the `start` property instead.
We can first create the new line call (doing the math to figure out what the relative xy should be)
We can then use our source range to get the node that we're modifying, using helper functions: something like const PipeExpressionToModify = getNodeFromRange(ast, sourceRange, 'PipeExpression')
It can be modified like so:
That is, adding a new expression to the pipeExpression body.
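Putting those pieces together, the mod might be sketched like this; the PipeExpression shape, helper names, and args encoding here are hypothetical simplifications:

```typescript
// A sketch of the whole mod: compute the relative delta, build the new
// call, and push it onto the pipe body. Shapes and names are hypothetical.
type Point = [number, number]

interface PipeExpression {
  type: 'PipeExpression'
  body: { fn: string; args: unknown[] }[]
}

// Clicked point minus where the sketch currently ends = relative [x, y].
function toRelative(clicked: Point, lastEnd: Point): Point {
  return [clicked[0] - lastEnd[0], clicked[1] - lastEnd[1]]
}

function addLineToPipe(pipe: PipeExpression, clicked: Point, lastEnd: Point): void {
  const [x, y] = toRelative(clicked, lastEnd)
  pipe.body.push({ fn: 'line', args: [[x, y], '%'] })
}

const pipe: PipeExpression = {
  type: 'PipeExpression',
  body: [{ fn: 'startProfileAt', args: [[1, 1], '%'] }],
}
// The sketch currently ends at its start point [1, 1]; the user clicked [3, 4].
addLineToPipe(pipe, [3, 4], [1, 1])
console.log(pipe.body[1]) // { fn: 'line', args: [[2, 3], '%'] }
```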
Once we recast and re-parse, the FE will update its source range of what's been edited to cover the whole pipeExpression.
The same process can be repeated for each segment the user adds.
Notice that the lines have arrows indicating their heads. While this has no impact on the model geometry-wise, it is important from a mental-model perspective.
Next up: constraints, as that's important context for editing the sketch segments.
Because it will help demonstrate something, we'll have our user exit the sketch and re-enter edit mode. They click to exit
We remove some of the extra bloat like the line heads, and the user's cursor gets put down at the bottom of the file on a blank line.
When the user's cursor ends up inside of the pipeExpression we offer an editSketch action.
The user's cursor may have gotten put on the last expression in the pipe because they put it there (which would in turn highlight the segment in the scene), or because they clicked/selected that segment in the 3d scene and that put the cursor on that line; it makes no difference.
The reason we can offer this button is that we have a mapping of ids to sourceRanges with metadata about the engine calls. If the engine call for these source ranges is related to segments or sketch stuff and is in a pipeExpression, we can offer them the option to edit the sketch. Clicking the editSketch button will just put the engine back into the edit mode that we were in before, when we were adding segments.
While it might make sense to segue into editing sketches (that is, dragging control points to update the code and the sketch), we have to make a detour past constraints first.
What's nice about constraints is they are fairly pure ast-mods; they only need help from the engine for selections. In the case of selecting segments, the engine reports segment ids when they are clicked, and the FE puts the user's cursor/s in the correct place in the code.
Of course, the engine isn't 100% needed for this; you can add multiple cursors in the editor by holding down cmd.
Select axis
Where the engine is needed is when the user does something like select an axis, as these don't have a representation in code, so this selection is kept in app-state instead of being nothing more than cursors in the editor. The below clip is an example of a selection of a segment and an axis.
In the modeling app, a segment function call is considered constrained if it does not contain literals for its values. That is, line([2.42, 3.41], %) is not constrained, whereas line([myVar1, myVar2], %) is constrained. The reason this rubric works well is that when the user first adds segments they will have literals, which means we can easily edit these values when the user drags that segment's control points.
It's obvious why we wouldn't update the values of line([myVar1, myVar2], %): we'd either have to remove the reference to the variables and turn them back into literals, or update the value of the variable declaration, which could cause other problems since it's likely the variable is used in places other than the function call we're currently focused on.
If line([2.42, 3.41], %) isn't constrained and line([myVar1, myVar2], %) is, then line([2.42, myVar], %) is partially constrained, i.e. its y-value is constrained in this case, but we could still drag the control point left and right.
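That rubric is easy to express as a query over the argument node types. A sketch, with illustrative node shapes and level names:

```typescript
// Classify a segment call's constraint level from its argument node types,
// per the rubric above. Node shapes and level names are illustrative.
type ArgNode =
  | { type: 'Literal' }
  | { type: 'Identifier' }
  | { type: 'BinaryExpression' }
  | { type: 'CallExpression' }

type ConstraintLevel = 'free' | 'partial' | 'full'

function constraintLevel(args: ArgNode[]): ConstraintLevel {
  const literals = args.filter((a) => a.type === 'Literal').length
  if (literals === args.length) return 'free' // e.g. line([2.42, 3.41], %)
  if (literals === 0) return 'full' // e.g. line([myVar1, myVar2], %)
  return 'partial' // e.g. line([2.42, myVar], %)
}

console.log(constraintLevel([{ type: 'Literal' }, { type: 'Literal' }])) // 'free'
console.log(constraintLevel([{ type: 'Identifier' }, { type: 'Identifier' }])) // 'full'
console.log(constraintLevel([{ type: 'Literal' }, { type: 'Identifier' }])) // 'partial'
```

Note that "not a literal" covers more than identifiers, which is exactly the point made below about binary expressions and call expressions counting too.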
Sorry, I'm sneaking into editing-segments territory. It's worth glimpsing at how they're related though.
While line([myVar1, myVar2], %) is a good example of a constrained segment, it's worth noting the value does not have to be an identifier (a variable reference); so long as it's not a literal, line([1 + 3, someFn()], %) would count too (binaryExpression and callExpression respectively). One of the most common ways you'll see constraints in the modeling-app is something like |> angledLine([45, segLen('seg01', %)], %), that is, an angled line (which takes angle-length values) constraining its length to a previous segment in the sketch that is tagged with seg01. This couples the length of both these lines in a way that is intuitive to read and easy from a code-mod perspective. The main limitation is that it's limited to backward references only.
At this point, it should be abundantly clear that constraints do not work in the same way as in most CAD-packages, where 2d solvers are the norm. The summary of why we're not going down that route is:
The code would be much uglier, harder to read, more verbose
It doesn't suit a code-first approach: with us, constraints are a declarative part of how the sketch is defined, instead of a series of statements that are solved at run time, which can fail (over- or under-constrained, not known until it is executed)
It also doesn't suit a code-first approach because it is inherently less expressive and relies on heuristics. For example, it is common to treat an angle like 30° as being either 30° or 210°, that is ±180°, and have a heuristic determine which one it should be. These heuristics mean that the line might snap to the opposite direction when part of the sketch is dragged far enough. We could bake in some of our own heuristics, but I think that would be antithetical to the exacting nature of a code-driven model.
If you're a sucker for punishment and you want to read more about this, feel free to read this issue
Let's look at some constraints in detail. The simplest is probably the vertical or horizontal constraint; let's revisit a horizontal constraint example.
Three things happened there: 1) segment/s were selected, 2) from the selection the button became enabled, 3) clicking the constraint modified the code.
Let's break down what happens internally.
Each constraint gets the current selections (an array of source ranges, and maybe some extra stuff like axis selected), this way they can all check if the ast-mod is possible.
We have helper functions that can take a source range and return the callExpression node for where the cursor is, if it can't find a callExpression for the cursor position then no constraints are possible.
If it does get a callExpression then we can evaluate how constrained it is.
As we've already covered, if it's something like line([1,2], %) or angledLine([45, 3], %) then it's not constrained at all; in the code we consider this to be "free", and it means any constraint transform can be applied.
The opposite of this is line([myX, myY], %) or angledLine([myAng, myLen], %); these are fully constrained, and no transforms are possible, so they are disregarded outright.
There are a lot more subtleties though with partially constrained function calls like line([2, myVar], %) or angledLine([myAng, 3], %). Because they each have an unconstrained value, they can be transformed, but in a limited way. For example, line([2, myVar], %) could have the vertical constraint applied, since that wouldn't interfere with the myVar value, and it would become yLine(myVar, %); likewise it could also have an angle constraint applied to it, as it would get transformed to angledLineOfYLength([myAng, myVar], %), since again myVar is undisturbed. However it could not be given a horizontal constraint, as the horizontal constraint transforms things to xLine(someXVar, %) and we'd have to nuke the y value: myVar.
It's also worth noting that we're not always transforming from a line(...) call to something else; the starting function call might be any of the following: lineTo, line, angledLine, angledLineOfXLength, angledLineOfYLength, angledLineToX, angledLineToY, xLine, yLine, xLineTo, yLineTo.
In order to solve this in a scalable way, so that we're not writing bespoke queries for each constraint, from each function call to each function call, we can categorise partially constrained segments into 6 categories to make them easier to reason about. They are xAbsolute, yAbsolute, xRelative, yRelative, angle, length.
This works because all of the function calls take one or two values, and those values are either an x or y value that can be relative to the last segment or absolute to the sketch plane, or they can be a length or an angle. Let's look at how we would categorise some examples:
line([myVar, 3], %) goes in the xRelative category because the variable is in the x param, and because line is a relative function call
lineTo([myVar, 3], %) goes in xAbsolute category, same as before but lineTo is absolute
line([3, myVar], %) goes in yRelative for same reason as first example, just y instead of x
angledLineOfXLength([45, myVar], %) also is xRelative
angledLineOfXLength([myAng, 3], %) is in the angle category
angledLine([45, myVar], %) is in the length category
xLine(3, %) is in the xRelative category
etc etc
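That bucketing can be sketched as a function of the call's name and which arg is the non-literal one. This is a simplified illustration covering only a few of the calls; the real query logic inspects the actual argument nodes:

```typescript
// Bucket a partially constrained call into one of the six categories above,
// keyed off the function name and which arg is unconstrained. Simplified.
type Category = 'xAbsolute' | 'yAbsolute' | 'xRelative' | 'yRelative' | 'angle' | 'length'

function categorise(fnName: string, unconstrainedArgIndex: 0 | 1): Category | undefined {
  switch (fnName) {
    case 'line': // relative call taking [x, y]
      return unconstrainedArgIndex === 0 ? 'xRelative' : 'yRelative'
    case 'lineTo': // absolute call taking [x, y]
      return unconstrainedArgIndex === 0 ? 'xAbsolute' : 'yAbsolute'
    case 'angledLine': // takes [angle, length]
      return unconstrainedArgIndex === 0 ? 'angle' : 'length'
    case 'angledLineOfXLength': // takes [angle, xLength]
      return unconstrainedArgIndex === 0 ? 'angle' : 'xRelative'
    case 'xLine': // single relative x value
      return 'xRelative'
    default:
      return undefined // remaining stdlib calls omitted from this sketch
  }
}

console.log(categorise('line', 0)) // 'xRelative'
console.log(categorise('lineTo', 0)) // 'xAbsolute'
console.log(categorise('angledLineOfXLength', 1)) // 'xRelative'
console.log(categorise('angledLine', 1)) // 'length'
```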
The last thing that's not been covered is the different types of constraints that can be applied. To give a sample of some:
vertical and horizontal, self explanatory
equalLength the second segment selection references the first segment selection, locking its length to be equal to the first
parallel is similar to equal length, just for angle, the only difference is the constraint can add 180° so that the second segment is within the closest 180° of its original direction.
angleBetween is similar to parallel, but it will reference the first segment's angle plus someAng, so as to set the angle between them to someAng
With these three bits of information (the function name of the callExpression we're looking to transform, the category it's in, and the type of constraint to apply) we can make a mapping of these.
Where TransformInfo is some data and functions used to perform the ast-mod. What we're concerned with now is querying whether a given ast-mod is possible for a certain selection, and we can do that with this mapping. That is, for a given source range we can get the callExpression and therefore the function's name; we also have utils that will look at the arguments in the function and give back a category. From there we can loop over each constraint type and try to access the TransformInfo: if it's there, the constraint can be applied; if not, it can't.
```js
for (const constraint of constraints) {
  const transformInfo = transformMap?.[fnName]?.[category]?.[constraint]
  if (transformInfo) {
    console.log(`${constraint} IS possible`)
  } else {
    console.log(`${constraint} NOT possible`)
  }
}
```
That's how we determine which constraints can be applied, but it also handles how the ast-mods of the constraints are fired too, in that we use what's in the (waves hands) transformInfo. To make this more concrete and to dig into transformInfo, we'll look at the horizontal constraint I said we were looking at several paragraphs ago. I had said three things happened there: 1) segment/s were selected, 2) from the selection the button became enabled, 3) clicking the constraint modified the code.
Step 1 is the user clicking on segments in the scene.
For the horizontal constraint, step 2 is the process I explained above but specifically for horizontal: transformMap?.[fnName]?.[category]?.['horizontal']. What I skimmed over is what happens with multiple cursors, and that depends on the type of constraint. Horizontal constraints are simple because they do not need to reference previous lines; it's changing the function calls where the cursors are. So in the case of multiple cursors, we check that the ast-mod can be applied to every cursor position in order for us to enable the constraint.
For step 3, we fire the ast-mods. transformInfos is an object with { tooltip: 'angledLine' /* the name of the function it will be transformed into */, createNode: () => {/* function that creates the new ast node */} }. When running with multiple cursors we'll have an array of transformInfos, and so we just need to loop over them, giving each of them the updated ast from the last. There are a few details I'm skimming over in terms of the information these functions expect, but that's it in a nutshell.
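The multi-cursor loop can be sketched like so; the Ast and TransformInfo shapes here are simplified stand-ins for the real ones:

```typescript
// Fire the ast-mods for multiple cursors: each transformInfo's createNode
// gets the ast produced by the previous one. Shapes are simplified stand-ins.
type Ast = { mods: string[] }

interface TransformInfo {
  tooltip: string // name of the function the call will be transformed into
  createNode: (ast: Ast) => Ast // returns a new ast with the mod applied
}

function applyTransforms(ast: Ast, infos: TransformInfo[]): Ast {
  // Thread the updated ast through each cursor's transform in turn.
  return infos.reduce((current, info) => info.createNode(current), ast)
}

// Two cursors, each turning its segment call into an xLine.
const infos: TransformInfo[] = [
  { tooltip: 'xLine', createNode: (a) => ({ mods: [...a.mods, 'xLine'] }) },
  { tooltip: 'xLine', createNode: (a) => ({ mods: [...a.mods, 'xLine'] }) },
]

const result = applyTransforms({ mods: [] }, infos)
console.log(result.mods) // ['xLine', 'xLine']
```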
The process above changes slightly for a constraint like parallel, as it will always need two cursors, but the first cursor is there to determine which segment is going to be referenced by the second. In that case the only mod needed for the first is to add a tag to its segment function call, and then it's the second cursor that has to go through the same process as above.
How does this affect editing segments?
To recap, the constraints between segments are not explicitly stated in the form `abc-segment is parallel to def-segment`; instead they are implicit in the use of code, that is
The second segment must be parallel because it uses the angle of the first segment to set its angle. This means the relationships/constraints between segments have to come from executing the code itself, as it has to be treated as the source of truth.
While it would be possible to write a query that looks at angledLine([segAng('seg01', %), 2], %) and realises that, because it uses segAng to set its angle, it has a parallel constraint (and furthermore we could look through the ast for the segment with the seg01 tag), what that doesn't cover is all of the weird and wacky relationships that users can add between segments. Best to show with a vid:
Where this becomes a major problem is when it comes to animating the sketches while control points are being dragged. There is no way to effectively communicate the relationships between all of the segments when users are able to define those relationships however they want with code. Therefore how they should be animated is also not possible to know without keeping the code in the loop. There are two options available:
Because it is easy to both query for and animate segments without references to previous lines, we can inform the engine to animate these when control points are being dragged; the rest of the sketch can be greyed out until dragging stops and re-execution occurs. For example, in the case of this 5-segment sketch
because it's the 4th (angledLine) segment that references a previous segment, if any of the first 3 segments' control points are dragged, we can animate the segments up to the 3rd segment and grey out the rest. This is a good short-term compromise.
The other option is to have accurate animations for the whole thing. This can be achieved if we re-execute just a small section of code (say the sketch's pipeExpression), letting it use the last execution's programMemory for any variables it needs access to. This way it will be performant enough to run many times per second. But it will also have to run on the server.
Final constraints thoughts
Annotations
We should think about what annotations we can add to segments that help communicate how they are constrained. An easy one to begin with is simple colours for the three main constraint levels. The clip below shows a segment starting as yellow for unconstrained, then turning red for partially constrained, and finally green for fully constrained. There's no reasoning behind the colours themselves; this just proves the concept.
Screenshare.-.2023-09-22.8_35_26.PM.mp4
Constraint best-practices
I briefly mentioned that because constraints in the KittyCAD modeling-app are different to other CAD-packages, there are limitations. The biggest one is that it really only allows backward references. This hugely reduces complexity, but it does require some best practices for how to go about constraining sketches. The most robust process to follow is to:
First constrain the angles of all your segments in the order in which they were added, which means going around the profile loop the same way you added the segments.
Then constrain all of the segments' dimensions, again in the order they were added.
Going around the profile loop in order makes sure you never find yourself in a situation where you need to add a forward reference. It also explains why it's important that each segment has an arrowhead, as it helps communicate the mental model of the profile loop having a direction.
To drive this home let's say we're trying to create a sketch for this part, and have it all constrained (excuse the poor drawing)
Here's a sped-up clip of the constraints being added following the best-practices above. Notice at the end we have the whole sketch's dimensions defined by a handful of variables, which of course means we can adjust the sketch easily, with it keeping its form, by adjusting these variables.
This discussion was converted from issue #391 on September 17, 2023 21:37.
Heading
Bold
Italic
Quote
Code
Link
Numbered list
Unordered list
Task list
Attach files
Mention
Reference
Menu
reacted with thumbs up emoji reacted with thumbs down emoji reacted with laugh emoji reacted with hooray emoji reacted with confused emoji reacted with heart emoji reacted with rocket emoji reacted with eyes emoji
-
My aim is for this to be a comprehensive explanation of how the app works, i.e. how we make code-as-source-of-truth a reality.
Here are the most important parts in a nutshell. The way to read it is with the exception of the blue rectangles, the rectangles represent data, whereas the ovals manipulate that data. (the engine isn't included here as the engineConnectionManager is there to abstract that)
AST🤖/Code🧑💻 pair
The sourceCode and the AST together is the source-of-truth for the app, and so changing from one type to another has to be ironclad. That is taking any given AST, casting to code, then reparsing to an AST should give you the same AST, and like wise for code->AST->code. (the one exception here is we only expect the AST to have accurate source ranges when it's been derived from code).
The one sentence explanation of what an AST even is would be "it's a nested data structure that represents the code", but really the best way to get a feel for it if you not familiar with the concept is to:
play around with our AST explorer (sidequest)
Screenshare.-.2023-09-21.3_59_33.PM.mp4
Besides being a standard part of practically all modern programming languages, it's especially useful to us because changing code programmatically wouldn't be scaleable with simple string manipulation. A good way to think about how these two work in tandem is that when a human wants to edit the code, they do so directly by typing in our text editor, when we want to edit the code using our app logic, we do so by updating the AST, and recast to code for the human. That is the code is the source-of-truth for the human, and the AST is the source of truth for robots.
The executor and the stdlib
The executor is ofc what runs the code, it does so by traversing through the AST creating programMemory is it goes (technical name is a tree-walk interpreter)
So in a simple case of
It creates programMemory like
The EngineConnectionManager, (mapping between engine-calls and sourceRanges)
But the way this actually produces 3d geometry is through our stblib and a websocket connection to the KittyCAD engine
Each one of our stdlib functions that needs to send a command to the engine, does so vias the engineConnectionManager, it passes along an id, and should receive a result from the engine. For the most part the result from the engine is not heavily used. For example in the case of a
line(...)
function call, we only want to know that the creation of that line was successful, though later we'll have stdlib functions that query existing geometry (e.g.getExtrudeVolume
) where the result will be important.But the engineConnectionManager does much more than this
It maintains a mapping between information important to the UI and information important to the engine. That mostly means associating the cmd_id with metadata related to execution and the AST, in particular the mapping between cmd_id and sourceRange is very important for a number of reasons, first of which when putting your cursor on a line of code that was responsible for creating something in the scene, we want to highlight that thing in the scene. In the video while faint you can see the line segment change as the cursor moves, this includes multiple cursors highlighting multiple segments.
Screenshare.-.2023-09-21.7_52_51.PM.mp4
Introducing the UI to our mental model in blue
This segment highlighting on cursor position is made possible by the mapping that the engineConnectionManager (the arrow going to the streamedUI is dotted as the engineConnectionManager communicates which ids to highlight and it will appear that way in the stream)
This also works the other way: hovering over a segment will highlight the code responsible, and clicking will select the line by putting the cursor in that part of the code (yes, selections in the app are mostly just cursor positions). This works with multiple selections via multiple cursors by holding the shift key (note this example with segments currently only works in edit mode).
Screenshare.-.2023-09-21.8_37_49.PM.mp4
This works because the app sends the user's mouse pixel-coordinates (and other mouse events like clicks) back to the engine; in turn, the engine responds with the ids of any scene entities that have been hovered or clicked on, and the engineConnectionManager is able to resolve these back to source ranges.
What we've gone over so far is basically how we go from code to execution, and how after execution we're able to associate artifacts in the 3d-scene with specific sections of code. What we haven't talked about is how the code is generated from UI-interactions. The above has been important foundation: selecting things in the scene and having that resolve to sourceRanges is needed because sourceRanges are what we use as hooks to surgically query and modify the AST.
These two steps:
Query: the AST, to see if the mod is possible (this includes cursor-position/sourceRange and other app state, like being in sketchmode for example),
Modify: Pulling the trigger on ast-mods if the query step is okay.
It's best to go through examples for this. A simple one would be when a user wants to add a horizontal constraint to a segment. In most cases this means taking an expression like `|> line([2, 3], %)` and changing it to `|> xLine(2, %)`. The process is: query the AST at the cursor's source range to check that the transform is possible, disabling the button for something like `|> yLine(3, %)` as it's already constrained, or if the code is entirely unrelated (`const myVar = 5`); then, if the query step is okay, pull the trigger on the ast-mod.
Screenshare.-.2023-09-22.11_56_56.AM.mp4
With that appetizer example out of the way.
Example workflow
Note these examples don't follow 1:1 with the workflow currently available in the app.
A user opens the app with an empty scene; they can rotate, pan, zoom.
They go to start a sketch. Once they do that we show them the default planes. There is no code generated at this point; the FE will make engine calls to create these planes and hang onto their ids in app memory. A user selects one of the planes, at which point we can generate some code. For readability `startSketchOnPlane` should take `xy`, `yz`, `xz`, but we'll accommodate arbitrary planes too.
Adding the profile origin
The user is now in sketchMode for that plane and can select a tool; they select the line segment tool. The user now clicks on the sketch plane with the tool. The FE sends an event to the engine about where in the stream the user clicked (xy-pixels) and, in turn, the engine responds with what happened (xy of the click on the sketch plane).
The ast for an empty editor looks like this
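Roughly speaking (the field names here are a guess at the general shape, not the exact node layout), it's a Program node with an empty body array:

```typescript
// A guess at the general shape of the AST for an empty editor: a Program
// node whose body is an empty array, i.e. no expressions yet.
const emptyProgram = {
  type: "Program",
  start: 0,
  end: 0,
  body: [] as unknown[],
};
```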
that is, an empty body, no expressions yet, so the ast-mod for starting a sketch is going to be to add an expression to the end of the body. The code we want to generate is
const part01 = startSketchOnPlane([x, y]) |> startProfileAt([x2,y2], %)
that is, a variable declaration `const part01` that is initialised with a callExpression `startSketchAt(...)`, which has one argument: an array expression with the xy coords `[1, 1]`. I'll pop in the AST for this expression that needs to be added to the program's body property, but because it's verbose I won't keep doing it throughout this issue. We don't have to deal with the AST like this directly, because we have helper functions, so it looks more like:
but really `[createLiteral(x), createLiteral(y)]` with xy from the engine event. Once we've added this expression to the AST, we then recast the AST back to code so that it shows up in the editor in human-readable form. The generated code is then converted back to the AST, and we execute from the AST.
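To sketch what such helpers might look like (only `createLiteral` is named above; `createArrayExpression` and the node fields are my assumptions):

```typescript
// Hypothetical sketches of AST-building helpers. start/end are left as
// placeholders here and get fixed up by the recast -> reparse loop below.
type LiteralNode = {
  type: "Literal";
  start: number;
  end: number;
  value: number;
  raw: string;
};
type ArrayExpressionNode = {
  type: "ArrayExpression";
  start: number;
  end: number;
  elements: LiteralNode[];
};

function createLiteral(value: number): LiteralNode {
  return { type: "Literal", start: 0, end: 0, value, raw: String(value) };
}

function createArrayExpression(elements: LiteralNode[]): ArrayExpressionNode {
  return { type: "ArrayExpression", start: 0, end: 0, elements };
}

// The [x, y] argument built from the engine's click event:
const coords = createArrayExpression([createLiteral(1), createLiteral(1)]);
```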
The reason it goes modifiedAst -> code -> ast is because each node has `start` and `end` source-range properties, and exactly how many characters will get added is only known by the recaster; we need to do this loop to get accurate start-end values. Another thing to note is that when we started a new sketch, there was no source range to use as a hook to know what to modify in the AST; instead we were adding a new expression to the end of the body. Now that we have some code, we'll keep the source range of this variable declaration in our app-state as the thing we're currently editing.
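A toy model of that loop, assuming a pretend one-node-per-line language, shows how re-parsing refreshes the placeholder ranges that programmatic mods leave behind:

```typescript
// Toy model of the modifiedAst -> code -> ast loop. Freshly created nodes
// carry placeholder start/end values; only the recaster knows exactly how
// many characters each node occupies, so recasting then re-parsing is what
// refreshes every source range. All names here are illustrative.
type Lit = { type: "Literal"; start: number; end: number; raw: string };
type Prog = { body: Lit[] };

// Recast: turn the AST back into text, one node per line.
function recast(ast: Prog): string {
  return ast.body.map((n) => n.raw).join("\n");
}

// Parse: walk the text, assigning accurate character offsets to each node.
function parse(code: string): Prog {
  const body: Lit[] = [];
  let offset = 0;
  for (const line of code.split("\n")) {
    body.push({ type: "Literal", start: offset, end: offset + line.length, raw: line });
    offset += line.length + 1; // +1 for the newline
  }
  return { body };
}

// A programmatic mod appends a node with placeholder start/end...
const modified: Prog = {
  body: [
    { type: "Literal", start: 0, end: 1, raw: "5" },
    { type: "Literal", start: 0, end: 0, raw: "42" }, // placeholders
  ],
};
// ...so we recast to code and re-parse to get accurate ranges back.
const fresh = parse(recast(modified));
```

This also hints at the round-trip property mentioned at the top: recasting then re-parsing must give back the same program, just with trustworthy ranges.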
To recap, what's happened here? The user has done a couple of things related to the 3d scene. By having the FE send back these click events etc., the engine can respond with events related to the 3d-scene that the FE doesn't have context on. These events are then used to produce code. In this case, because the engine and the app are working in lockstep, we shouldn't need to re-execute to produce the 3d scene artifacts, but for the sake of explanation let's look at execution each time.
With execution we produce artifacts that exist in the 3d-scene by executing the generated code. In this case the CallExpression `startProfileAt`, being part of the stdlib, makes an engine-call to create this start point so it can be displayed on the screen (let's go with this 3-line crosshair thing so as not to confuse it with the plane origin). Now that code has caused something to appear in the 3d scene, we should recall that the way the engine and the app keep their own representations of the entities created by code execution is with ids, where the engineConnectionManager keeps a mapping of those ids to other code metadata.
Let's look at the result of the execution of our code so far, that is, the program memory. What `startProfileAt` creates is a sketchGroup; ostensibly this is an array of values for each segment, but it also has a start property indicating where the sketch starts. In our sample code's program memory, the `value` property is an empty array, but we do have the start property populated. This start value has an id; this is the id that was sent to the engine when creating the sketch profile origin. And again, having this id is how we resolve hovers and clicks in the 3d scene to source ranges in the editor.
Animating draft segments and adding segments
Back to where we left off in terms of UX: the user had selected the sketch start point, but they've still got the line segment tool equipped, so the engine will continue to animate draft segments (dashed line). Then, when the user clicks, it will follow the same pattern: the FE informs the engine of the click, the engine replies with the 2d coords of the click, and the frontend uses that to modify the code.
In this case we want to transform
into
As stated earlier, we have the source range of this variable declaration in our app state, so we know what we want to modify; we also have the program memory from the last execution. Since `line` is a relative line, we want to add `line([x, y], %)` to the pipe expression, where xy is where the user just clicked minus the position of the end of the sketch. That would be the last element in the `value` property of the sketchGroup, or in this case, where it's an empty array, we look at the `start` property instead. We can first create the new line call (doing the math to figure out what the relative xy should be).
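That math step can be sketched like so (the type and helper names are assumptions): take the engine's absolute click coordinates, subtract the sketch's current endpoint (falling back to `start` when `value` is empty), and those become the new `line` call's arguments:

```typescript
// Sketch of the "do the math" step: subtract the sketch's current endpoint
// from the engine's absolute click coordinates to get the relative [x, y]
// for the new line(...) call. Names here are illustrative.
type Point = [number, number];
type SketchGroup = { start: { to: Point }; value: { to: Point }[] };

function lastPoint(sketch: SketchGroup): Point {
  // Last segment's endpoint, or the profile start when `value` is still
  // empty (the case described above).
  const last = sketch.value[sketch.value.length - 1];
  return last ? last.to : sketch.start.to;
}

function relativeLineArgs(sketch: SketchGroup, clicked: Point): Point {
  const [px, py] = lastPoint(sketch);
  return [clicked[0] - px, clicked[1] - py];
}

// Profile started at [1, 1], no segments yet, user clicks at [3, 4]:
const sketch: SketchGroup = { start: { to: [1, 1] }, value: [] };
const [dx, dy] = relativeLineArgs(sketch, [3, 4]);
// -> generate `|> line([dx, dy], %)`
```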
We can then use our source range to get the node that we're modifying, using helper functions, something like
const pipeExpressionToModify = getNodeFromRange(ast, sourceRange, 'PipeExpression')
It can be modified like so:
That is, adding a new expression to the pipeExpression body.
Once we recast and re-parse, the FE will update its source range of what's been edited to the whole pipeExpression
The same process can be repeated for each segment the user adds.
Notice that the lines have arrows indicating their heads. While this has no impact on the model geometry-wise, it is important from a mental-model perspective.
Next is constraints, as that's important context for editing the sketch segments.
Because it will help demonstrate something, we'll have our user exit the sketch and re-enter edit mode. They click to exit
We remove some of the extra bloat like the line heads, and the user's cursor gets put at the bottom of the file on a blank line.
When the user's cursor ends up inside of the pipeExpression we offer an editSketch action.
The user's cursor may have ended up on the last expression in the pipe because they put it there (which would in turn highlight the segment in the scene), or because they clicked/selected that segment in the 3d scene and that put the cursor on that line; it makes no difference.
The reason we can offer this button is that we have a mapping of ids to sourceRanges with metadata about the engine calls; if the engine call for these source ranges is related to segments or sketch stuff and is in a pipeExpression, we can offer to edit the sketch. Clicking the editSketch button will just put the engine back into the edit mode that we were in before, when we were adding segments.
While it might make sense to segue to editing sketches (that is, dragging control points to update the code and the sketch), we have to make a detour past constraints first.
What's nice about constraints is that they are fairly pure ast-mods; they only need help from the engine for selections. In the case of selecting segments, the engine reports segment ids when they are clicked, and the FE puts the user's cursor/s in the correct place in the code.
267209312-31774921-6a67-4493-b6ab-f6929aa1ea7a.mp4
Of course, the engine isn't 100% needed for this; you can add multiple cursors in the editor by holding down cmd.
Select axis
Where the engine is needed is when the user does something like select an axis, as these don't have a representation in code, so we keep this selection in app-state instead of it being nothing more than cursors in the editor. The below clip is an example of a selection of a segment and an axis.
267210220-9f5ddb93-765e-4d87-9294-cf56cb6ba46c.mp4
What even are constraints?
In the modeling app, a segment function call is considered constrained if it does not contain literals for its values. That is, `line([2.42, 3.41], %)` is not constrained, whereas `line([myVar1, myVar2], %)` is constrained. The reason this rubric works well is that when the user first adds segments, they will have literals, which means we can easily edit these values when the user drags that segment's control points. It's obvious why we wouldn't update the values of `line([myVar1, myVar2], %)`: we'd either have to remove the reference to the variables and turn them back into literals, or update the value of the variable declaration, which could cause other problems since it's likely the variable is used in multiple places other than the function call we're currently focused on. If `line([2.42, 3.41], %)` isn't constrained and `line([myVar1, myVar2], %)` is, then `line([2.42, myVar], %)` is partially constrained, i.e. its y-value is constrained in this case, but we could still drag the control point left and right.
267220201-5d61a845-ee71-47d7-99a5-8334ba50ec92.mp4
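The rubric can be sketched as a tiny classifier (the `Value` kinds below are a simplification of real AST node types, and the level names anticipate terms used a little further down):

```typescript
// Sketch of the constrained-ness rubric: a value is unconstrained when it
// is a literal, constrained otherwise (identifiers, expressions, calls).
type Value = { kind: "literal" } | { kind: "identifier" } | { kind: "expression" };

function constraintLevel(args: Value[]): "free" | "partial" | "full" {
  const constrained = args.filter((a) => a.kind !== "literal").length;
  if (constrained === 0) return "free"; // line([2.42, 3.41], %)
  if (constrained === args.length) return "full"; // line([myVar1, myVar2], %)
  return "partial"; // line([2.42, myVar], %)
}
```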
Sorry, I'm sneaking into editing-segments territory. It's worth glimpsing at how they're related, though.
While `line([myVar1, myVar2], %)` is a good example of a constrained segment, it's worth noting that a value does not have to be an identifier (a variable reference); so long as it's not a literal, `line([1 + 3, someFn()], %)` would count too (a binaryExpression and a callExpression, respectively). One of the most common ways you'll see constraints in the modeling-app is something like `|> angledLine([45, segLen('seg01', %)], %)`; that is, the angled line (which takes angle-length values) is constraining its length to a previous segment in the sketch that is tagged with `seg01`. This couples the length of both these lines in a way that is intuitive to read and easy from a code-mod perspective. The main limitation is that it's limited to backward references only. At this point, it should be abundantly clear that constraints do not work in the same way as in most CAD-packages, where 2d solvers are the norm. The summary of why we're not going down that route is because
If you're a sucker for punishment and you want to read more about this, feel free to read this issue
Let's look at some constraints in detail. The simplest is probably the vertical or horizontal constraint; let's revisit a horizontal constraint example.
267227300-0ccd8a2a-ce28-4ead-a2ca-377b77dfd1b0.mp4
Three things happened there: 1) segment/s were selected, 2) from the selection the button became enabled, 3) clicking the constraint modified the code.
Let's break down what happens internally.
Each constraint gets the current selections (an array of source ranges, and maybe some extra stuff like an axis selection); this way they can all check whether their ast-mod is possible.
267229707-38092c63-8e2a-4eae-8a57-9f70a986e9d3.mp4
We have helper functions that can take a source range and return the callExpression node for where the cursor is; if no callExpression can be found for the cursor position, then no constraints are possible.
If it does get a callExpression then we can evaluate how constrained it is.
As we've already covered, if it's something like `line([1, 2], %)` or `angledLine([45, 3], %)` then it's not constrained at all; in the code we consider this to be `"free"`, and it means any constraint transform can be applied. The opposite of this is `line([myX, myY], %)` or `angledLine([myAng, myLen], %)`; these are fully constrained, no transforms are possible, and so they are disregarded outright. There are a lot more subtleties, though, with partially constrained function calls like `line([2, myVar], %)` or `angledLine([myAng, 3], %)`. Because they each have an unconstrained value, they can be transformed, but in a limited way. For example, `line([2, myVar], %)` could have the vertical constraint applied, since that wouldn't interfere with the `myVar` value; it would become `yLine(myVar, %)`. Likewise it could also have an angle constraint applied, as it would get transformed to `angledLineOfYLength([myAng, myVar], %)`, since again `myVar` is undisturbed. However, it could not be given a horizontal constraint, as the horizontal constraint transforms things to `xLine(someXVar, %)` and we'd have to nuke the y value: `myVar`. It's also worth noting that we're not always transforming from a `line(...)` call to something else; the starting function call might be any of the following: lineTo, line, angledLine, angledLineOfXLength, angledLineOfYLength, angledLineToX, angledLineToY, xLine, yLine, xLineTo, yLineTo
In order to solve this in a scalable way, so that we're not writing bespoke queries for each constraint from each function call to each function call, we categorise partially constrained segments into 6 categories to make them easier to reason about. They are `xAbsolute`, `yAbsolute`, `xRelative`, `yRelative`, `angle`, `length`. This works because all of the function calls take one or two values, and those values are either an x or y value (which can be relative to the last segment or absolute to the sketch plane), or a length or an angle. Let's look at how we would categorise some examples:
- `line([myVar, 3], %)` goes in the `xRelative` category, because the variable is in the x param and because `line` is a relative function call
- `lineTo([myVar, 3], %)` goes in the `xAbsolute` category, same as before but `lineTo` is absolute
- `line([3, myVar], %)` goes in `yRelative`, for the same reason as the first example, just y instead of x
- `angledLineOfXLength([45, myVar], %)` is also `xRelative`
- `angledLineOfXLength([myAng, 3], %)` is in the `angle` category
- `angledLine([45, myVar], %)` is in the `length` category
- `xLine(3, %)` is in the `xRelative` category
- etc etc
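One plausible way to derive the category (the per-function lookup table below is my assumption of how it could be implemented; only the two-argument calls above are handled, and implicit forms like `xLine` would need extra care):

```typescript
// A plausible categorisation of partially constrained calls. The table is
// an assumption for illustration, not the app's real implementation.
type Category = "xAbsolute" | "yAbsolute" | "xRelative" | "yRelative" | "angle" | "length";

// What each argument position means for a few of the stdlib functions.
const argMeaning: Record<string, Category[]> = {
  line: ["xRelative", "yRelative"],
  lineTo: ["xAbsolute", "yAbsolute"],
  angledLine: ["angle", "length"],
  angledLineOfXLength: ["angle", "xRelative"],
};

// The category of a partially constrained call is the meaning of the
// argument that is NOT a literal (i.e. the already-constrained value).
function categorise(fnName: string, argIsLiteral: boolean[]): Category | undefined {
  const meanings = argMeaning[fnName];
  if (!meanings) return undefined;
  const idx = argIsLiteral.findIndex((isLit) => !isLit);
  return idx === -1 ? undefined : meanings[idx];
}

// e.g. line([myVar, 3], %): the x param is the variable, line is relative,
// so categorise("line", [false, true]) gives "xRelative".
```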
The last thing that's not been covered is the different types of constraints that can be applied. To give a sample of some:
- `vertical` and `horizontal`: self-explanatory
- `equalLength`: the second segment selection references the first segment selection, locking its length to be equal to the first
- `parallel`: similar to equalLength, just for angle; the only difference is the constraint can add 180° so that the second segment stays within the closest 180° of its original direction
- `angleBetween`: similar to parallel, but it references the first segment's angle plus `someAng`, so as to set the angle between them to `someAng`
With these three bits of information (the function name of the callExpression we're looking to transform, the category it's in, and the type of constraint to apply), we can make a mapping of these.
Here `TransformInfo` is some data and functions used to perform the ast-mod, but what we're concerned with now is querying whether a given ast-mod is possible for a certain selection, and we can do that with this mapping. That is, for a given source range we can get the callExpression and therefore the function's name; we also have utils that will look at the arguments in the function and give back a category; from there we can loop over each constraint type and try to access the `TransformInfo`. If it's there, then the constraint can be applied; if not, it can't.
That's how we determine which constraints can be applied, but it also handles how the ast-mods of the constraints are fired, in that we use what's in the (waves hands) `transformInfo`. To make this more concrete and to dig into `transformInfo`, we'll look at the horizontal constraint from several paragraphs ago. I had said three things happened there: 1) segment/s were selected, 2) from the selection the button became enabled, 3) clicking the constraint modified the code. For `horizontal`, the lookup is `transformMap?.[fnName]?.[category]?.['horizontal']`
, but what I skimmed over is what happens with multiple cursors, and that depends on the type of constraint. Horizontal constraints are simple because they do not need to reference previous lines; the mod changes the function calls where the cursors are, so in the case of multiple cursors, we check that the ast-mod can be applied to every cursor position in order for us to enable the constraint. A `transformInfo`
is an object like `{ tooltip: 'angledLine' /* the name of the function it will be transformed into */, createNode: () => {/* creates the new ast node */} }`. When running with multiple cursors we'll have an array of `transformInfos`
and so we just need to loop over them, giving each of them the updated AST from the last. There are a few details I'm skimming over in terms of the information these functions expect, but that's it in a nutshell. The process above changes slightly for a constraint like parallel, as it will always need two cursors; the first cursor is there to determine which segment is going to be referenced by the second, in which case the only mod needed for the first is to add a tag to the segment function call, and then it's the second cursor that goes through the same process as above.
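Putting the pieces together, the nested lookup can be sketched like this (the map contents below are illustrative, mirroring the `transformMap?.[fnName]?.[category]?.['horizontal']` access described above):

```typescript
// Sketch of the constraint lookup. The nested-map shape mirrors the
// chained access from the text; entries and TransformInfo contents
// are illustrative, not the app's real tables.
type TransformInfo = {
  tooltip: string; // the name of the function it will be transformed into
  createNode: () => unknown; // builds the replacement AST node
};

type TransformMap = Record<string, Record<string, Record<string, TransformInfo>>>;

const transformMap: TransformMap = {
  line: {
    // free: any constraint transform can be applied
    free: {
      horizontal: { tooltip: "xLine", createNode: () => ({ type: "CallExpression" }) },
      vertical: { tooltip: "yLine", createNode: () => ({ type: "CallExpression" }) },
    },
    // yRelative (e.g. line([2, myVar], %)): vertical keeps the y variable,
    // but horizontal is absent because it would nuke it.
    yRelative: {
      vertical: { tooltip: "yLine", createNode: () => ({ type: "CallExpression" }) },
    },
  },
};

// Whether a constraint button should be enabled is just a chained lookup:
function canApply(fnName: string, category: string, constraint: string): boolean {
  return transformMap?.[fnName]?.[category]?.[constraint] !== undefined;
}
```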
How does this affect editing segments?
To recap, the constraints between segments are not explicitly stated in the form `abc-segment is parallel to def-segment`; instead they are implicit in the code, that is, the second segment must be parallel because it uses the angle of the first segment to set its own angle. This means the relationships/constraints between segments have to come from executing the code itself, as the code has to be treated as the source of truth.
While it would be possible to write a query that looks at `angledLine([segAng('seg01', %), 2], %)` and realises that, because it uses `segAng` to set its angle, it has a parallel constraint (and furthermore we could look through the AST for the segment with the `seg01` tag), what that doesn't cover is all of the weird and wacky relationships that users can add between segments. Best to show with a vid:
267305389-e081ccbd-a5a3-48a4-8c9d-bc96b24cb0dc.mp4
Where this becomes a major problem is when it comes to animating the sketches while control points are being dragged. There is no way to effectively communicate the relationships between all of the segments when users are able to define those relationships however they want with code; therefore, working out how they should be animated is also not possible without keeping the code in the loop. There are two options available
because it's the 4th (angledLine) segment that references a previous segment, if any of the first 3 segments' control points are dragged, we can animate the segments up to the 3rd segment and grey out the rest. This is a good short-term compromise.
Final constraints thoughts
Annotations
We should think about what annotations we can add to segments to help communicate how they are constrained. An easy one to begin with is simple colours for the three main constraint levels. The clip below shows a segment starting as yellow for unconstrained, then turning red for partially constrained, and finally green for fully constrained. There's no reasoning behind the colours themselves; this just proves the concept.
Screenshare.-.2023-09-22.8_35_26.PM.mp4
Constraint best-practices
I briefly mentioned that because constraints in the KittyCAD app are different to other CAD-packages there are limitations, the biggest one being that it can really only handle backward references. This hugely reduces complexity, but it does require some best practices for how to go about constraining sketches. The most robust process to follow is to
Going around the profile loop in order ensures you never find yourself needing to add a forward reference, and it also explains why it's important that each segment has an arrowhead: it helps communicate the mental model of the profile loop having a direction.
To drive this home let's say we're trying to create a sketch for this part, and have it all constrained (excuse the poor drawing)
Here's a sped-up clip of the constraints being added following the best-practices above. Notice that at the end we have the whole sketch's dimensions defined by a handful of variables, which of course means we can easily adjust the sketch, with it keeping its form, by changing those variables.
contraints-faster.mp4
The code at the end of the video is: