
Miguel Caamano

Members
  • Posts

    13
  • Joined

  • Last visited

  • Days Won

    1

Miguel Caamano last won the day on April 16 2020

Miguel Caamano had the most liked content!



  1. Absolutely. Please find it attached. Thanks Miguel. 2020-04-15 09-50-26.mkv
  2. Hi Cristobal. Completely my bad, I got two things mixed up in this post. Please disregard it entirely; I'll elaborate in a new one. Apologies. Miguel.
  3. Hi! I'd like to propose a future development that I think would improve speed and usability of the workflows graph. As in the Unreal Engine node editor, I'd suggest adding a contextual menu, much like the one that already appears on right-click over the graph, but limited to "Tasks and outputs" nodes only. It would work as follows: on the output of any node, the user clicks and drags to create the connection line, but when releasing it over a blank area, instead of "losing" the connection, a dialog box appears listing all the possible/compatible nodes that can be connected to the one the line was dragged from. I hope that this GIF helps to clarify my point: I believe this feature would go a long way towards speeding up graph building. Thanks a lot Miguel.
  4. Hi, I'd like to elaborate on three observations about the workflows nodes that have drawn my attention over the last few days. First: there is no way to "disable" nodes in the script. I know that may not seem like a priority, but when an extensive script populated with many nodes becomes buggy, disabling nodes for testing is the way to go; all node-based systems offer this possibility to the user. Second: nodes don't rearrange well when an intermediate node gets deleted or cut. The default behaviour is to break the stream, erasing the connection between the incoming and the dependent nodes in the graph, whereas most node systems connect dependencies to the dependent nodes by default. Nodes set up with a bifurcation: When the copy node gets erased: <- Wrong behaviour <- Right behaviour workflows should exhibit This happens not only with bifurcations but also in single-stream examples. This "feature" can be especially painful when a node has many dependencies, like a multiple-format conversion. Third: to mitigate the second point, and also a feature I miss a lot, it would be advisable to create a "dot" node. Most node systems provide this special node type, which allows connecting one node to many others in a simple way. It also serves two purposes: 1) it keeps the graph tidy and easy to read; 2) if a node connected to many others has to be replaced, it becomes much easier. Some examples: Unreal Engine: Natron/Nuke: Blender: The "dot/reroute" node adds a lot of flexibility to the graph, speeds up operation a great deal, and makes the graph far easier to read. Thanks Miguel.
  5. Hello, Not sure if it is my version, but it seems that workflows does not have an "undo" option. This is a bit of a problem when the script becomes populated with nodes and something goes wrong, or the operator has a "way too fast" set of fingers. Thanks, Miguel.
  6. Hello Cristobal. Thanks a lot for the reply. Let me know if anything is unclear or needs further explanation. Thanks. Miguel.
  7. Hello, I believe it is necessary to include a list of recently used workflows scripts in the "File" menu, mainly for the sake of simplicity and speed of operation. Thanks Miguel.
  8. Hello, After tinkering a bit with workflows, there is a feature I believe is necessary: an "on the fly" option for tasks that involve any kind of transcoding. At the moment, when trying to convert a file to a new location from format "X" to format "Y", we are forced to choose between the following two approaches: Case 01) read from storage 01 -> convert on storage 01 -> move to storage 02 Case 02) read from storage 01 -> copy from storage 01 to 02 -> convert on storage 02 -> delete copy on storage 02 It might seem pretty simple, but both scenarios create their own issues that could very easily be solved with an "on the fly" option. Case 01: seems pretty straightforward, but there is a problem: the "original file to format Y" conversion creates traffic both ways (reading and writing) to the storage, and then a third time to move the conversion out of its original place. That is 3x the amount of traffic on the network. On 10GbE that might not be a problem, but on a WAN or even gigabit Ethernet, it can be a show stopper. Also, moving is a kind of "deleting" operation, which might not be wise in certain environments. Case 02: I believe this one will be the most common, but it still has issues: the original file gets copied to the second storage location and then read again to be transcoded; later, the copy of the original file gets deleted. This again creates 3x the needed network traffic and temporarily 2x the storage space. In both scenarios the conversion will work, but the stress the network suffers is needless. For a file of a few hundred megabytes this is alright, no harm done. But at large sizes and large quantities of files, this becomes slow to almost unusable. Also, we should assume that network and storage resources are always scarce. The solution, in my opinion, would be to create an "on the fly" option that uses the system's internal memory to handle the conversion without relying on the storage as such. Broadly speaking, this would make the workflow: read the original file -> hold it, or a portion of it, in memory for transcoding -> transcode -> copy/render it to the destination location. I believe this should be an option for all the nodes involved in converting/transcoding/transforming any material, and that it should be exposed on the face of the node when the option is activated. Thanks a lot. Miguel.
  9. Hi there! Silly question: it seems that there is no support for MKV files, at least on Windows. Is that right? Thanks!
  10. With backdrops: With the addition of backdrops we can see at a glance how we are downloading material from a client FTP, checksumming it, copying it elsewhere, and then creating three different copies: EXR, MOV & thumbnail. Any issue that arises when using this script can be easily located, and the operator doesn't have to play the guessing game. In general I think that by including backdrops we make Workflows easier and faster, giving it the ability to deal with larger constructions and more complex scenarios while making it easier for the operator to use. Thanks Miguel.
  11. For clarity's sake, Workflows should include functional backdrops in its graph. Backdrops are useful not only for grouping nodes, but also for making any node-graph easier to read, rearrange and debug. The difference is pretty evident on any graph: Without backdrops:
  12. Hi, It would be pretty interesting to include a simple node that modifies the resolution in the metadata through the stream, so we can set up "proxy" levels. This would make it visually easier for the workflows operator to identify/build structures on the graph in cases like the following: The graph above shows this scenario: if we wanted to use workflows as a system to receive renders from a department and create all the proper full/low-res EXR/MOV versions for the studio to use, at the moment the output resolution has to be set in each of the "output" nodes, which makes the graph difficult/slow/obscure to read and modify. If we had a longer graph, where we include burn-ins on the low res but not the thumb nor the full, and metadata injection only on the full, etc., having these proxy nodes at the beginning of each branch would help with clarity, ease of reading, and debugging of the graph. This is a pretty simplistic example, but I think it shows clearly the potential of these mini nodes and their modification of the metadata. Cheers. Miguel
  13. It would be a good addition to the software to have a "node/extra input" that interrupts the flow of a stream until a signal is given to it. The following example paints it clearly: In the image I'm setting up a scenario where a piece of data has to be converted to .tif format and then copied to another storage location. After the copy is successful, the .tif conversion needs to be deleted. According to the Overview, the trash node will be executed before the copy node has a chance to run. This can very easily become a hazard and a serious threat to the integrity of the data/storage. A very quick way to solve this would be to provide an extra input on the "Trash" node: an input that halts the node until a signal is provided through it, like the following example: Most node systems provide every node with an array of inputs that modify/condition its behaviour. We can think of this "Wait for" input as the "Mask" input of a Merge node in compositing software, or a secondary input like a "target" in a logic blueprint. Another option would be to create a generic "HALT" node that interrupts the execution of the stream/branch until a signal on a second input is satisfied. I think this is a mild feature request that could benefit the entire software and its usability. Thanks Miguel
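The contextual-menu idea in post 3 (drop a connection on blank canvas, get a list of connectable nodes) boils down to filtering node types by port compatibility. A minimal sketch, assuming an invented type system — the node names and port types below are illustrative, not Workflows' actual internals:

```python
# Hypothetical registry of node types with typed input/output ports.
NODE_TYPES = {
    "Copy":      {"inputs": {"media"},  "outputs": {"media"}},
    "Transcode": {"inputs": {"media"},  "outputs": {"media"}},
    "Checksum":  {"inputs": {"media"},  "outputs": {"report"}},
    "Email":     {"inputs": {"report"}, "outputs": set()},
}

def compatible_nodes(output_type):
    """Node types whose input accepts a connection carrying `output_type`."""
    return sorted(name for name, spec in NODE_TYPES.items()
                  if output_type in spec["inputs"])

# Dropping a "media" connection on empty canvas would offer:
print(compatible_nodes("media"))   # ['Checksum', 'Copy', 'Transcode']
```

The pop-up menu would simply render this filtered list instead of the full right-click catalogue.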
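The "right behaviour" described in post 4 — deleting an intermediate node should bridge its sources to its dependants rather than orphaning them — is a small graph operation. A sketch under the assumption that the graph is an adjacency dict; the real Workflows data model is unknown:

```python
def delete_and_reconnect(edges, node):
    """Remove `node` from an edge dict {src: set(dst)} and bridge across it."""
    upstream = {src for src, dsts in edges.items() if node in dsts}
    downstream = edges.pop(node, set())
    for src in upstream:
        edges[src].discard(node)
        edges[src] |= downstream      # bridge each source to every dependant
    return edges

# Bifurcation example from the post: Read -> Copy -> {EXR, MOV}
graph = {"Read": {"Copy"}, "Copy": {"EXR", "MOV"}, "EXR": set(), "MOV": set()}
delete_and_reconnect(graph, "Copy")
# "Read" now feeds "EXR" and "MOV" directly instead of losing both branches.
```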
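The undo request in post 5 is classically handled with a command stack: each edit records how to reverse itself, and Ctrl+Z pops and replays the reversal. A minimal sketch with invented editing operations (the actual Workflows actions are unknown):

```python
class GraphEditor:
    def __init__(self):
        self.nodes = set()
        self._undo = []                      # stack of reverse actions

    def add_node(self, name):
        self.nodes.add(name)
        self._undo.append(lambda: self.nodes.discard(name))

    def delete_node(self, name):
        self.nodes.discard(name)
        self._undo.append(lambda: self.nodes.add(name))

    def undo(self):
        if self._undo:
            self._undo.pop()()               # run the most recent reversal

ed = GraphEditor()
ed.add_node("Copy")
ed.delete_node("Copy")                       # the "way too fast fingers" case
ed.undo()                                    # "Copy" is back
```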
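The "on the fly" pipeline from post 8 (read once, transcode in memory, write once at the destination) can be sketched by streaming the source through a transcoder's stdin/stdout pipes, so no intermediate copy touches either storage. The ffmpeg command shown in the comment is an example invocation, not a confirmed Workflows internal:

```python
import shutil
import subprocess

def transcode_streaming(src_path, dst_path, cmd):
    """Pipe src through `cmd` (which reads stdin, writes stdout) into dst."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=dst)
        shutil.copyfileobj(src, proc.stdin)   # the one read of the source
        proc.stdin.close()
        proc.wait()                           # the one write at the destination

# e.g. transcode_streaming("in.mov", "out.mxf",
#          ["ffmpeg", "-i", "pipe:0", "-f", "mxf", "pipe:1"])
```

Compared with the post's Case 01/02, network traffic drops from 3x to 1x per direction, at the cost of buffering in RAM.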
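The "wait for" input proposed in post 13 is, in essence, an event gate: the Trash step refuses to run until the Copy step signals success, instead of racing ahead and deleting data that hasn't been copied yet. A sketch using a threading event; the node names and execution model are assumptions about how such a gate might behave:

```python
import threading

copy_done = threading.Event()
log = []

def copy_step():
    log.append("copied tif to storage 02")
    copy_done.set()                      # signal the gated node

def trash_step():
    copy_done.wait()                     # HALT until the signal arrives
    log.append("trashed tif on storage 01")

t = threading.Thread(target=trash_step)
t.start()                                # trash is scheduled first...
copy_step()                              # ...but can only run after the copy
t.join()
# log order is guaranteed: copy first, trash second
```

A generic "HALT" node would be the same gate packaged as its own node rather than as an extra input on Trash.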
