Replies: 2 comments
-
There was a mailing list post "Possible road map for GUI debugger" that talked about a time traveling approach that works off of improving the trace output.
Yeah recording the stream for testing, nice.
Yes, this is the approach regardless of playback or not.
I think all you describe here is sane. Definitely leaner by avoiding the server components of DAP. Do we lose anything by doing playback rather than breaking execution? I don't know offhand; maybe, but are the things we lose worth what the server implementation costs? I have some thoughts about how large this file is and how long it takes to load. If you are pushing a 1 GB file through Daffodil, how large is the trace file it produces? We can add configuration to the trace to help, so probably not an issue. I can't think of anything offhand to disagree with.
-
A couple more thoughts
Taking this approach could help the IDE be developed in parallel to a server component. The IO streams could be replaced with connections to the server once it's ready, but until then the IDE plugin could pull from a file like you describe. It might get somewhat more complicated than that, but it seems like it could work out well as a development tool and test harness.
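To make that substitution point concrete, here is a minimal sketch (names and shapes are purely illustrative, not anything in the Daffodil codebase): the IDE plugin consumes events through one small interface, and a file-backed implementation can later be swapped for a server-backed one without touching the consumer.

```scala
// Hypothetical seam between the IDE plugin and its event supply.
// One serialized event per element; a future ServerEventSource would
// implement the same trait over a socket connection.
trait EventSource {
  def events(): Iterator[String]
}

// Until the server exists, the plugin pulls from a recorded trace file
// (modeled here as pre-read lines to keep the sketch self-contained).
final class FileEventSource(lines: Seq[String]) extends EventSource {
  def events(): Iterator[String] = lines.iterator
}
```

The consumer never learns whether it is replaying a file or talking to a live debugger, which is what makes the parallel development (and the test-harness use) possible.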
-
@regetom and I were discussing lean methods to connect Daffodil to a DAP-compliant debugger like VS Code, and would like to propose a two-stage approach:
This architecture would support a (very) minimal debugging experience: at every step,
Re (1), the current trace debugger is "merely" an instance of the interactive debugger with a static set of commands, which then outputs a subset of the current state to `stdout` using an ad-hoc rendering format. Instead, a new trace debugger could output the captured state in a structured way, such as JSON, to be read by downstream programs. A minimal set of information can be serialized to support the desired debugging tasks, and this set can be grown as needed. We also imagine a serialized trace--in a known and more structured format!--would be useful for other scenarios outside of debugging, like testing.

Re (2), to create a DAP-compliant client, we would load a trace file and infer DAP-related state from it:
(I'm still learning what concepts DAP requires.)
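As a rough sketch of the Re (1) idea, structured trace output could be as simple as one JSON object per line (JSON Lines), so consumers can stream the file instead of loading it whole. The event fields below are made up for illustration; the real minimal set would come from Daffodil's parser state.

```scala
// Hypothetical shape of a single trace event; the field names are
// illustrative, not Daffodil's actual state model.
final case class TraceEvent(step: Int, parser: String, bitPos: Long)

object TraceJson {
  // Serialize one event as a single-line JSON object. Hand-rolled here to
  // stay dependency-free; a real implementation would use a JSON library.
  def toJsonLine(e: TraceEvent): String =
    s"""{"step":${e.step},"parser":"${e.parser}","bitPos":${e.bitPos}}"""
}
```

A downstream consumer (debugger client, test harness) then only needs a line-oriented JSON parser, and new fields can be added without breaking existing readers.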
Relating to the existing code in this repository, the notion of an "event stream" (`type EStream = ZStream[Any, Nothing, Event]`) can be reused: the tracing debugger produces an event stream, which then happens to be serialized to a file, or wherever. The "trace consumer" DAP process would consume such a stream into memory, maintain the current debugging state, and allow "movement" within the trace history via received DAP commands. The current set of `Command` subtypes would be replaced/integrated with DAP-related types (using [scala-debug-adapter](https://github.com/scalacenter/scala-debug-adapter), etc.).

What do you think about this architectural split, and about the structure of such a DAP-compliant debugger?
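The "consume into memory, then allow movement" part could be little more than a cursor over the drained stream. A minimal sketch (using `LazyList` as a dependency-free stand-in for the `EStream` `ZStream`; the event type is left generic since the real `Event` isn't shown here):

```scala
// Hypothetical trace cursor: the whole event history sits in memory and
// DAP-style step commands just move an index, clamped at both ends.
final class TraceCursor[E](events: Vector[E]) {
  private var idx = 0
  def current: E = events(idx)
  def stepForward(): E = { if (idx < events.size - 1) idx += 1; current }
  def stepBack(): E = { if (idx > 0) idx -= 1; current }
}

object TraceCursor {
  // "Consume such a stream into memory": drain the stream once; all
  // subsequent movement is over the in-memory history.
  def fromStream[E](s: LazyList[E]): TraceCursor[E] =
    new TraceCursor(s.toVector)
}
```

Because stepping backward is just decrementing an index, the time-travel behavior falls out for free, which is the main thing playback buys over breaking live execution.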
Some open questions: