Hey all,
just a quick question about job dismantling and XML datasets:
Here's the situation:
there's a job folder with 2 PDFs and 1 XML
/4711
  /cover.pdf
  /bookblock.pdf
  /4711.xml
I'm picking up the folder 4711 and reading the content of the XML with Pickup XML (metadata in job folder asset).
When I'm dismantling the job afterwards, is the dataset 'multiplied' for each file of the job, or is the dataset shared between all copies of the job? So if I add elements to the dataset of cover.pdf, will the dataset of bookblock.pdf also be updated?
Cheers,
Thorsten
XML pickup dismantle job
Thorsten,
After you dismantle the folder, every resulting job gets its own dataset. Changes to the dataset of one job are not reflected in the datasets of the other jobs that were generated when dismantling the folder.
Regards,
Peter
XML pickup dismantle job
Hi Thorsten,
Maybe you can use "Ungroup", process your cover, and then use "Assemble" with "Merge metadata" set to Yes to pass the new dataset information from your cover to bookblock.pdf?
XML pickup dismantle job
Thanks for the suggestion, but that didn't work either...
I want to add status information to the XML for each item I'm processing. The relevant part of the XML looks something like this (simplified):
<order>
  <print-items>
    <print-item id="1">
      <print-item-type>book</print-item-type>
      <subitem id="1">
        <subitem-type>bookblock</subitem-type>
        <filename>bookblock.pdf</filename>
      </subitem>
      <subitem id="2">
        <subitem-type>cover</subitem-type>
        <filename>cover.pdf</filename>
      </subitem>
    </print-item>
  </print-items>
</order>
So while processing the cover or the bookblock, I want to add status elements to the <print-item> node (not to the respective <subitem> node!):
<print-item id="1">
  <print-item-type>book</print-item-type>
  <status code="processing" time="13.11.12 22:46:32"/>
  <status code="processing" time="13.11.12 22:46:38"/>
  <subitem id="1">
I am totally aware of the fact that this would result in two <status> elements under <print-item>, but this would give us visibility on when the book was processed.
"Merge metadata" ended up with only one of the two status elements; they probably simply overwrote each other.
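For reference, here is a rough sketch of how a script element could append such a status element to the job's dataset. The dataset name "Xml", the XPath expression and the exact Dataset/Document calls are assumptions, so please check them against the Switch scripting reference:

// Sketch only: appends a <status> element to the job's XML dataset.
// The dataset name, the XPath and several method names below are
// assumptions; verify them against the Switch scripting reference.
function jobArrived( s : Switch, job : Job )
{
	var theDataset = job.getDataset( "Xml" );          // dataset attached by Pickup XML
	var theDoc = new Document( theDataset.getPath() ); // DOM view of the dataset file

	// Locate the <print-item> node and append a new <status> child
	var nodes = theDoc.evalToNodes( "/order/print-items/print-item" );
	var printItem = nodes.getItem( 0 );
	var status = theDoc.createElement( "status" );
	status.setAttribute( "code", "processing" );
	status.setAttribute( "time", new Date().toString() ); // format the timestamp as needed
	printItem.appendChild( status );

	// Save the modified XML as a new dataset for this job
	var newDataset = job.createDataset( "XML" );
	theDoc.save( newDataset.getPath() );
	job.setDataset( "Xml", newDataset );

	job.sendToSingle( job.getPath() );
}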

XML pickup dismantle job
@Thorsten: you have not specified how you update the dataset, but I am assuming you are using a script.
Here is a suggestion: do not update the dataset of each dismantled job straight away, but store the information in global data. After the jobs have been assembled, you update the dataset with what you stored in global data.
Read the section on maintaining global data in the help on the Environment class. As the Switch class inherits from the Environment class, all methods of that class are available in the s variable.
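A minimal sketch of the first half of that approach, assuming the global data methods of the Environment class (setGlobalData, lockGlobalData, unlockGlobalData); the scope and key names are made up:

// Sketch: store a per-file timestamp in global data while the jobs are
// still split. Scope and key are illustrative; see the Environment class
// help for the exact global data semantics.
function jobArrived( s : Switch, job : Job )
{
	var scope = "BookStatus";     // hypothetical private scope
	var key = job.getName();      // e.g. "cover.pdf" or "bookblock.pdf"

	// Lock so parallel jobs don't clobber each other's entries
	s.lockGlobalData( scope );
	s.setGlobalData( scope, key, new Date().toString() );
	s.unlockGlobalData( scope );

	job.sendToSingle( job.getPath() );
}

A second script element placed after Assemble would then read the timestamps back with s.getGlobalData( scope, key ) and write both <status> elements into the single remaining dataset.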