121 Commits

Author SHA1 Message Date
Jamie Hardt
99c18478e6 Continuity implementation 2021-06-06 21:51:42 -07:00
Jamie Hardt
2656aaaf20 Style fixes 2021-06-06 20:24:52 -07:00
Jamie Hardt
3882939833 Style fixes 2021-06-06 20:24:26 -07:00
Jamie Hardt
e0b2d00332 Style fixes 2021-06-06 20:22:51 -07:00
Jamie Hardt
5559e1b057 Style fixes 2021-06-06 16:46:29 -07:00
Jamie Hardt
3cd5a99dbb create folders for CSV outputs 2021-06-06 16:42:23 -07:00
Jamie Hardt
898fd96808 refactorings 2021-06-06 16:28:44 -07:00
Jamie Hardt
80305f6098 Tweaked validation behavior 2021-06-06 15:50:29 -07:00
Jamie Hardt
338e8c8fa6 Fixed unit test 2021-06-06 15:41:16 -07:00
Jamie Hardt
6f37de4f20 Refactoring entity creation 2021-06-06 15:35:54 -07:00
Jamie Hardt
51eada4cde Refactored report template 2021-06-06 14:03:25 -07:00
Jamie Hardt
ddd2cdb873 Refactored report template 2021-06-06 13:54:59 -07:00
Jamie Hardt
b60795fd95 Addressed Line count feature 2021-06-06 13:11:29 -07:00
Jamie Hardt
d1a5430923 Surveyed TODOs and style 2021-06-05 19:39:33 -07:00
Jamie Hardt
5416433c82 Update README 2021-06-05 19:09:29 -07:00
Jamie Hardt
bda68b9c3b Knocked down a todo 2021-06-05 19:08:32 -07:00
Jamie Hardt
40b80b9997 tiwddle 2021-06-04 18:21:12 -07:00
Jamie Hardt
2f55b16750 Marker application test 2021-06-04 17:21:42 -07:00
Jamie Hardt
be4a4b91c0 Adjusted header appearance 2021-06-03 22:08:48 -07:00
Jamie Hardt
7ab2907b26 Fixed some unit tests 2021-06-03 21:01:20 -07:00
Jamie Hardt
55e19a3b8d v0.8 2021-06-03 20:47:27 -07:00
Jamie Hardt
784699050a Removed old parser code 2021-06-03 20:31:12 -07:00
Jamie Hardt
55324a0f82 Refactoring line count 2021-06-03 12:24:31 -07:00
Jamie Hardt
5fb4c389f4 Refactoring reports to use docparser 2021-06-03 10:37:49 -07:00
Jamie Hardt
b46fc85b16 Refactoring reports to use docparser 2021-06-03 10:19:33 -07:00
Jamie Hardt
caf4317b76 more refactoring for new docparser 2021-06-02 15:40:06 -07:00
Jamie Hardt
24c5a87358 Updated unit tests 2021-06-02 11:06:18 -07:00
Jamie Hardt
8d4058d026 Adapting existing tests to new parser 2021-06-01 23:30:30 -07:00
Jamie Hardt
8654fdb847 Twiddle 2021-06-01 22:28:19 -07:00
Jamie Hardt
594830144d ADR line tests 2021-06-01 22:27:35 -07:00
Jamie Hardt
1a43888c43 Implementation 2021-06-01 22:21:22 -07:00
Jamie Hardt
ade1cc463a Implementation 2021-06-01 21:55:36 -07:00
Jamie Hardt
f5acfd2362 Refactored tag compiler into new file 2021-06-01 21:08:15 -07:00
Jamie Hardt
945ba6102b Some refinement 2021-06-01 21:04:13 -07:00
Jamie Hardt
2466db1401 Created basic test case 2021-06-01 19:12:56 -07:00
Jamie Hardt
76a90363fb TagMapping implementation
Silly bug I made
2021-06-01 14:48:10 -07:00
Jamie Hardt
be7a01cab9 TagMapping implementation
Silly bug I made
2021-06-01 14:40:58 -07:00
Jamie Hardt
32e3cfc594 TagMapping implementation 2021-06-01 14:02:40 -07:00
Jamie Hardt
c6be2ba404 Bunch of implementation 2021-05-31 23:22:16 -07:00
Jamie Hardt
3502eaddfd Some functional util code 2021-05-31 15:16:23 -07:00
Jamie Hardt
2e08499f70 Some functional util code 2021-05-28 16:31:57 -07:00
Jamie Hardt
2cc4f423cf Type annotations 2021-05-27 23:08:20 -07:00
Jamie Hardt
7558e8f63c PT Session file 2021-05-27 22:20:46 -07:00
Jamie Hardt
f514dde259 Refactoring adr parser 2021-05-27 21:53:44 -07:00
Jamie Hardt
3dd36a9901 Refactoring tag parser 2021-05-27 21:34:43 -07:00
Jamie Hardt
d1bb5990b2 More typing, removed dead code in tc convert 2021-05-27 20:32:24 -07:00
Jamie Hardt
859a427fc4 Appearance tweaks, reorganized display code 2021-05-27 20:23:28 -07:00
Jamie Hardt
644c8a6f5d Type annotations 2021-05-27 11:26:28 -07:00
Jamie Hardt
23174e3a97 Work on rewrting the parser
And a unit test
2021-05-27 11:05:53 -07:00
Jamie Hardt
3889f871b8 Work on rewrting the parser 2021-05-26 16:50:46 -07:00
Jamie Hardt
f4ad4a5b5d Note to self 2021-05-26 00:44:35 -07:00
Jamie Hardt
a9596c444d Eh committing the example 2021-05-26 00:26:30 -07:00
Jamie Hardt
52bbecb909 Eh committing the example 2021-05-26 00:25:52 -07:00
Jamie Hardt
b50d83c748 AvidMarker creation, reworking 2021-05-26 00:03:57 -07:00
Jamie Hardt
1294d5e208 Removed some old options that aren't needed anymore 2021-05-25 22:39:08 -07:00
Jamie Hardt
9633bcdefb Added CSV output options 2021-05-25 19:16:29 -07:00
Jamie Hardt
20b84623ff Added CSV output 2021-05-25 18:37:45 -07:00
Jamie Hardt
9927488f1e Reworked function calls to report functions
To make things make more sense.
2021-05-25 12:53:32 -07:00
Jamie Hardt
2f037ad4db report finesse 2021-05-25 12:23:20 -07:00
Jamie Hardt
f5a80b3bdf Character reports 2021-05-25 12:03:24 -07:00
Jamie Hardt
38fc92183d Tweaked line coutn 2021-05-25 10:56:39 -07:00
Jamie Hardt
9d2c50b219 Tweaked parameters 2021-05-25 10:49:01 -07:00
Jamie Hardt
f2fd0e5da3 Appearance tweaks to the reports
Also fixed a mistake in the example spotting document
2021-05-24 19:27:37 -07:00
Jamie Hardt
4ea8330921 Charade example and many bugfixes to make it work 2021-05-24 18:57:20 -07:00
Jamie Hardt
d92b85897f Added Charade.ptx 2021-05-24 18:04:52 -07:00
Jamie Hardt
88b32da0bd Merge remote-tracking branch 'origin/master' 2021-05-24 18:02:44 -07:00
Jamie Hardt
e9a23fb680 Added Charade.ptx 2021-05-24 18:02:36 -07:00
Jamie Hardt
d6c5026bf0 Fixed bug in report generation fonts 2021-05-20 22:40:54 -07:00
Jamie Hardt
e52bddc2fa Updated 2021-05-20 20:20:43 -07:00
Jamie Hardt
4d9538b997 Enhanced Avid marker export 2021-05-20 20:13:25 -07:00
Jamie Hardt
be92cbe884 Style 2021-05-20 18:11:58 -07:00
Jamie Hardt
ccf35283a7 Style 2021-05-20 17:56:26 -07:00
Jamie Hardt
d52c063607 Style 2021-05-20 17:55:55 -07:00
Jamie Hardt
9a5273bac5 Line Count enhancements 2021-05-20 17:53:39 -07:00
Jamie Hardt
ac5e7ffc35 note 2021-05-20 15:37:14 -07:00
Jamie Hardt
f8cadcb9dc Refactoring layout things 2021-05-20 15:34:00 -07:00
Jamie Hardt
b1722966c6 Cleaning up style issues 2021-05-20 11:57:06 -07:00
Jamie Hardt
00a05506d4 Document generation tweaks 2021-05-19 19:33:16 -07:00
Jamie Hardt
fe93985041 Create folders for different reports 2021-05-19 18:42:59 -07:00
Jamie Hardt
0fa37d7f19 Twiddle 2021-05-19 18:29:38 -07:00
Jamie Hardt
90aa01749e More reorganizing in here 2021-05-19 18:29:09 -07:00
Jamie Hardt
efcb0acd08 Reorganized commands a little 2021-05-19 18:24:42 -07:00
Jamie Hardt
35f7672d61 Tweaked priority field 2021-05-19 13:27:48 -07:00
Jamie Hardt
67b785c2c2 Added page count footer implementation
Added another validation
2021-05-19 13:25:51 -07:00
Jamie Hardt
6cb93ea75f Basic talent scripts
Fooling around with uncide
2021-05-18 22:11:19 -07:00
Jamie Hardt
9a1dff2c3c Basic talent scripts 2021-05-18 21:25:06 -07:00
Jamie Hardt
8c95930aaa More Line Count implementaion but might rethink 2021-05-18 17:41:56 -07:00
Jamie Hardt
2aca5a3e8f Very rough implementation of line count 2021-05-17 23:04:38 -07:00
Jamie Hardt
8a067984eb Format twiddles 2021-05-16 22:19:35 -07:00
Jamie Hardt
9e00ba0dab Twiddle 2021-05-16 21:56:39 -07:00
Jamie Hardt
a112e73a64 Supervisor page impl 2021-05-16 21:46:39 -07:00
Jamie Hardt
0e5b1e6dc9 Supervisor report implementation 2021-05-16 18:53:40 -07:00
Jamie Hardt
5808f3a4ac Widened field in --show-available-keys 2021-05-16 18:13:00 -07:00
Jamie Hardt
a7a472d63f Got imported code running for test purposes 2021-05-16 15:22:25 -07:00
Jamie Hardt
8b85793826 Dumped PDF code from my jupyter notebook into source 2021-05-16 15:14:27 -07:00
Jamie Hardt
e78e55639d Command line plumbing 2021-05-16 14:48:38 -07:00
Jamie Hardt
294e0732df Update version to v0.7 2021-05-16 14:11:28 -07:00
Jamie Hardt
9a88e5ff4e .idea files udpate 2021-05-16 14:08:33 -07:00
Jamie Hardt
f161532768 Added attrs argument to TreeBuilder.start()
Which is now required I guess?
2021-05-16 14:07:39 -07:00
Jamie Hardt
c937a3745b Create jamie.xml 2021-05-15 21:21:18 -07:00
Jamie Hardt
d17f6951d6 Implementing validation feature 2020-10-22 14:10:28 -07:00
Jamie Hardt
6ad29ccf8b Refactoring 2020-10-22 11:17:20 -07:00
Jamie Hardt
bb504ed7ce Movie enhancements 2020-10-22 10:42:46 -07:00
Jamie Hardt
5db8a01271 A refactor to shorten this method 2020-10-21 13:47:40 -07:00
Jamie Hardt
99096f7dec Fixed a bug 2020-10-21 13:11:58 -07:00
Jamie Hardt
d734180010 Implementing movie tracking 2020-10-21 13:08:38 -07:00
Jamie Hardt
b0e7703303 Update __init__.py 2020-10-10 23:20:04 -07:00
Jamie Hardt
30daec452d Update __init__.py 2020-10-10 23:17:03 -07:00
Jamie Hardt
6ca1e5532d Update setup.py 2020-10-10 23:14:57 -07:00
Jamie Hardt
a0d386b666 Update setup.py 2020-10-10 23:13:05 -07:00
Jamie Hardt
3226e63f1d Update __init__.py 2020-10-10 23:05:07 -07:00
Jamie Hardt
3a597b5046 Update __init__.py
Version 0.5
2020-10-10 23:02:59 -07:00
Jamie Hardt
b5d9b5acc2 Update setup.py
Added package_data
2020-10-10 22:59:44 -07:00
Jamie Hardt
9f2a080f6b Enhanced Avid marker export 2020-05-18 19:08:59 -07:00
Jamie Hardt
1903e2a1f9 Update SRT.xsl
Changed "encoding" attribute to something that should work better.
2020-05-17 13:52:01 -07:00
Jamie Hardt
69491d98d7 Create SRT.xsl
Added XSLT for creating SRT subtitles.
2020-05-17 12:50:02 -07:00
Jamie Hardt
7816f08912 Update __main__.py
Fixed typo
2020-05-17 12:49:37 -07:00
Jamie Hardt
44388c6b7d Update __main__.py
Fixed text formatting
2020-05-17 11:52:36 -07:00
Jamie Hardt
9daedca4de More documentation
Documentation of new command-line opts.
2020-05-17 11:46:29 -07:00
Jamie Hardt
93a014bdc0 Added command to extract single reels 2020-05-17 11:27:06 -07:00
Jamie Hardt
9bb2ae136a Added some more documentation 2020-05-15 18:11:07 -07:00
53 changed files with 3191 additions and 994 deletions

1
.gitignore vendored

@@ -103,3 +103,4 @@ venv.bak/
 # mypy
 .mypy_cache/
 .DS_Store
+/example/Charade/Session File Backups/

19
.idea/dictionaries/jamie.xml generated Normal file

@@ -0,0 +1,19 @@
<component name="ProjectDictionaryState">
<dictionary name="jamie">
<words>
<w>adlib</w>
<w>bottompadding</w>
<w>fmpxml</w>
<w>futura</w>
<w>leftpadding</w>
<w>lineafter</w>
<w>linebefore</w>
<w>ptulsconv</w>
<w>retval</w>
<w>smpte</w>
<w>subheader</w>
<w>timecode</w>
<w>timespan</w>
</words>
</dictionary>
</component>


@@ -1,6 +1,7 @@
 <component name="ProjectDictionaryState">
   <dictionary name="jamiehardt">
     <words>
+      <w>fmpxmlresult</w>
       <w>frac</w>
       <w>mins</w>
     </words>

8
.idea/misc.xml generated

@@ -1,4 +1,10 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <project version="4">
-  <component name="ProjectRootManager" version="2" project-jdk-name="Python 3.7" project-jdk-type="Python SDK" />
+  <component name="ProjectRootManager" version="2" project-jdk-name="Python 3.8 (ptulsconv)" project-jdk-type="Python SDK" />
+  <component name="PyPackaging">
+    <option name="earlyReleasesAsUpgrades" value="true" />
+  </component>
+  <component name="PythonCompatibilityInspectionAdvertiser">
+    <option name="version" value="3" />
+  </component>
 </project>

9
.idea/ptulsconv.iml generated

@@ -1,11 +1,10 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <module type="PYTHON_MODULE" version="4">
   <component name="NewModuleRootManager">
-    <content url="file://$MODULE_DIR$" />
-    <orderEntry type="jdk" jdkName="Python 3.7" jdkType="Python SDK" />
+    <content url="file://$MODULE_DIR$">
+      <excludeFolder url="file://$MODULE_DIR$/venv" />
+    </content>
+    <orderEntry type="jdk" jdkName="Python 3.8 (ptulsconv)" jdkType="Python SDK" />
     <orderEntry type="sourceFolder" forTests="false" />
   </component>
-  <component name="TestRunnerService">
-    <option name="PROJECT_TEST_RUNNER" value="Unittests" />
-  </component>
 </module>


@@ -4,32 +4,13 @@
 ![Upload Python Package](https://github.com/iluvcapra/ptulsconv/workflows/Upload%20Python%20Package/badge.svg)
 
 # ptulsconv
-Read Pro Tools text exports and generate XML, JSON, reports
+Read Pro Tools text exports and generate JSON, PDF reports.
 
-## Quick Example
-
-    % ptulsconv STAR_WARS_IV_R1_ADR_Notes_PT_Text_Export.txt > SW4_r1_ADR_Notes.xml
-    % xmllint --format SW4_r1_ADR_Notes.xml
-    <?xml version="1.0"?>
-    <FMPXMLRESULT xmlns="http://www.filemaker.com/fmpxmlresult">
-      <ERRORCODE>0</ERRORCODE>
-      <PRODUCT NAME="ptulsconv" VERSION="0.0.1"/>
-      <DATABASE DATEFORMAT="MM/dd/yy" LAYOUT="summary"
-        NAME="STAR_WARS_IV_R1_ADR_Notes_PT_Text_Export.txt"
-        RECORDS="84" TIMEFORMAT="hh:mm:ss"/>
-      <METADATA>
-        <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Title" TYPE="TEXT"/>
-        <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Supervisor" TYPE="TEXT"/>
-        <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Client" TYPE="TEXT"/>
-        <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Scene" TYPE="TEXT"/>
-        <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Version" TYPE="TEXT"/>
-        <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Reel" TYPE="TEXT"/>
-        <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Start" TYPE="TEXT"/>
-    [... much much more]
+## Notice!
+At this time we're using `ptulsconv` mostly for converting ADR notes in a Pro Tools session
+into an XML document we can import into Filemaker Pro.
+At this time there are a lot of changes in the HEAD of this package and you should use the last posted Pypi package.
+New features and much better reporting, including native PDF reports, are coming soon!
 
 ## Installation
 The easiest way to install on your site is to use `pip`:
@@ -87,7 +68,7 @@ The output will contain the range:
 ### Fields in Track Names and Markers
 
-Fields set in track names, and in track comments, will be applied to *each* clip on that track. If a track comment
+Fields set in track names, and in track comments, will be applied to each clip on that track. If a track comment
 contains the text `{Dept=Foley}` for example, every clip on that track will have a "Foley" value in a "Dept" column.
 Likewise, fields set on the session name will apply to all clips in the session.
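The scope rules described above (session-wide fields, then track fields, then the clip's own fields) amount to an ordered dictionary merge. This is only an illustrative sketch of that precedence, not ptulsconv's actual implementation; the function name is invented here:

```python
def effective_fields(session_fields, track_fields, clip_fields):
    """Merge tag fields with later scopes winning: session-wide fields
    apply to every clip, track name/comment fields apply to each clip on
    that track, and a clip's own fields override both."""
    merged = dict(session_fields)
    merged.update(track_fields)
    merged.update(clip_fields)
    return merged


# A track comment {Dept=Foley} gives every clip a Dept of "Foley"
# unless the clip itself says otherwise:
row = effective_fields({"Title": "Charade"}, {"Dept": "Foley"}, {"QN": "P101"})
```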

BIN
example/Charade/Charade.ptx Normal file

Binary file not shown.

170
example/Charade/Charade.txt Normal file

@@ -0,0 +1,170 @@
SESSION NAME: Charade
SAMPLE RATE: 48000.000000
BIT DEPTH: 24-bit
SESSION START TIMECODE: 00:59:00:00
TIMECODE FORMAT: 25 Frame
# OF AUDIO TRACKS: 13
# OF AUDIO CLIPS: 2
# OF AUDIO FILES: 1
T R A C K L I S T I N G
TRACK NAME: Scenes
COMMENTS:
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 @ {Sc=Logos} 01:00:00:00 01:00:17:21 00:00:17:21 Unmuted
1 2 @ {Sc=1-2 Ext. French Countryside - Dusk} 01:00:17:21 01:01:00:24 00:00:43:03 Unmuted
1 3 @ {Sc=Main Titles} 01:01:00:24 01:03:04:02 00:02:03:02 Unmuted
1 4 @ {Sc=6 Ext. Megve - Day} 01:03:04:02 01:04:29:05 00:01:25:02 Unmuted
1 5 @ {Sc=8A Swimming Pool - Onto Terrace} 01:04:29:05 01:07:01:14 00:02:32:09 Unmuted
1 6 @ {Sc=11-12 Ext. Ave Foch - Lampert Apartment} 01:07:01:14 01:07:28:22 00:00:27:08 Unmuted
1 7 @ {Sc=15 Int. Apartment Landing} 01:07:28:22 01:07:39:16 00:00:10:19 Unmuted
1 8 @ {Sc=17 In. Lampert House - Empty} 01:07:39:16 01:08:57:21 00:01:18:05 Unmuted
1 9 @ {Sc=25 Int. Morgue} 01:08:57:21 01:09:38:23 00:00:41:02 Unmuted
1 10 @ {Sc=28 Int. Grandpierre's Office} 01:09:38:23 01:13:23:16 00:03:44:18 Unmuted
1 11 @ {Sc=36 Int. Lampert Apartment - Night} 01:13:23:16 01:15:18:13 00:01:54:21 Unmuted
1 12 @ {Sc=38A Int. Funeral Chapel - Day} 01:15:18:13 01:18:50:20 00:03:32:07 Unmuted
1 13 @ {Sc=63 Ext/Int American Embassy - Establishing} 01:18:50:20 01:19:09:20 00:00:19:00 Unmuted
1 14 @ {Sc=70 Int. Barholomew's Office} 01:19:09:20 01:25:12:07 00:06:02:12 Unmuted
1 15 @ {Sc=77 Ext. Esplanade des Champs-Elysées} 01:25:12:07 01:26:53:03 00:01:40:20 Unmuted
1 16 @ {Sc=88 Int. Nightclub - Night} 01:26:53:03 01:30:07:06 00:03:14:03 Unmuted
1 17 @ {Sc=102 Int. Nightclub Lounge - Night} 01:30:07:06 01:31:49:18 00:01:42:12 Unmuted
1 18 @ {Sc=108 Int. Hotel Lobby} 01:31:49:18 01:32:44:17 00:00:54:23 Unmuted
1 19 @ {Sc=109 Int. Elevator} 01:32:44:17 01:33:05:20 00:00:21:03 Unmuted
1 20 @ {Sc=110 Int. Hotel Third Landing} 01:33:05:20 01:33:55:07 00:00:49:12 Unmuted
1 21 @ {Sc=112 Int. Reggie's Room - Night} 01:33:55:07 01:34:23:00 00:00:27:17 Unmuted
1 22 @ {Sc=116 Int. Hotel Corridor} 01:34:23:00 01:34:46:16 00:00:23:16 Unmuted
1 23 @ {Sc=120 Int. Reggie's Room - Night} 01:34:46:16 01:35:25:17 00:00:39:01 Unmuted
1 24 @ {Sc=122 Ext. Hotel Window - Night} 01:35:25:17 01:36:49:04 00:01:23:11 Unmuted
1 25 @ {Sc=132 Int. Gideon's Hotel Room - Night} 01:36:49:04 01:38:13:08 00:01:24:04 Unmuted
1 26 @ {Sc=134 Int. Reggie's Room - Night} 01:38:13:08 01:38:29:16 00:00:16:08 Unmuted
1 27 @ {Sc=134 Int. Reggie's Room - Night} 01:38:29:16 01:40:18:07 00:01:48:16 Unmuted
1 28 @ {Sc=139/140 Int. Hotel Room/Phone Booth Intercut} 01:40:18:07 01:40:54:18 00:00:36:11 Unmuted
1 29 @ {Sc=142 Int. Reggie's Room - Night} 01:40:54:18 01:41:46:15 00:00:51:22 Unmuted
TRACK NAME: PETER
COMMENTS: $CN=1 {Actor=Cary Grant} $Mins=5
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 "Does this belong to you?" (alt for "Does HE belong to you?" {R=Replace Line} $QN=P101 01:05:10:16 01:05:11:19 00:00:01:03 Unmuted
1 2 "Well I telephones by nobody answered." {R=Off mic} $QN=P102 01:13:47:24 01:13:49:19 00:00:01:19 Unmuted
1 3 "It's in all the afternoon papers." {R=Replace Line} {Note=Adding "ALL"} $QN=P103 01:13:59:21 01:14:01:11 00:00:01:14 Unmuted
1 4 "Here you are." {R=Replace temp} $QN=P104 01:33:08:00 01:33:09:01 00:00:01:01 Unmuted
1 5 "On the street where you live..." {R=Replace temp} $QN=P105 01:33:10:09 01:33:12:03 00:00:01:19 Unmuted
1 6 (adlib response to REGGIE) {R=Added/Replaces sync} [ADLIB] $QN=P106 01:34:27:10 01:34:29:03 00:00:01:18 Unmuted
1 7 (effort add PUNCH efforts, react to GETTING PUNCHED) {R=Added} [EFF] $QN=P107 01:34:31:11 01:34:41:23 00:00:10:12 Unmuted
1 8 "… And close these windows after me." {R=Replace temp} $QN=P108 01:35:19:16 01:35:21:11 00:00:01:20 Unmuted
1 9 (effort LEAPING to balcony) [EFF] {R=Added} $QN=P109 01:36:13:02 01:36:15:06 00:00:02:04 Unmuted
1 10 "It's me, Peter." {R=Performance} {Note=More voice, call through door} $QN=P110 01:38:32:01 01:38:33:03 00:00:01:02 Unmuted
TRACK NAME: REGGIE
COMMENTS: $CN=2 {Actor=Audrey Hepburn} $Mins=5
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 (react to getting squirted by gun) {R=Added} [EFF] $QN=R101 01:03:40:02 01:03:41:19 00:00:01:16 Unmuted
1 2 "Look I admit I came to Paris to escape American Provincial but that doesn't mean I'm ready for French Traditional!" {R=Clarity} {Note=Low Priority} $QN=R102 01:04:45:22 01:04:50:15 00:00:04:18 Unmuted
1 3 "Oh, no— you see, I don't really love him." {R=Clarity} $QN=R103 01:06:14:17 01:06:16:15 00:00:01:22 Unmuted
1 4 (reactions to empty house, turning open cupboards etc.) {R=Added} [EFF] $QN=R104 01:07:41:13 01:08:41:19 00:01:00:06 Unmuted
1 5 (effort RUN INTO Grandpierre) {R=Added} [EFF] $QN=R105 01:08:41:19 01:08:45:12 00:00:03:17 Unmuted
1 6 "I know, I'm sorry." {R=Replace Sync} {Note=More hesitant} $QN=R106 01:10:36:00 01:10:38:06 00:00:02:06 Unmuted
1 7 "Misses Lampert, Misses Charles Lampert." {R=Clarity} {Note=Prounonce P of Lampert harder} $QN=R107 01:19:30:22 01:19:32:18 00:00:01:21 Unmuted
1 8 "Mister Bartholomew this is Regina Lampert— Mister Bartholomew I just saw one of those me—" {R=Clarity} $QN=R108 01:30:24:12 01:30:28:16 00:00:04:04 Unmuted
1 9 "Where?" {R=Replace temp} $QN=R109 01:33:09:07 01:33:09:24 00:00:00:16 Unmuted
1 10 "Peter? … Peter? … Peter are you alright?" {R=More sotto voce} $QN=R110 01:34:53:10 01:35:01:02 00:00:07:16 Unmuted
TRACK NAME: BARTHOLOMEW
COMMENTS: $CN=3 {Actor=Walter Matthau} $Mins=8
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 "Is there anything wrong, Miss Tompkins?" {R=Replace offscreen} $QN=B101 01:19:17:07 01:19:19:07 00:00:02:00 Unmuted
1 2 "Oh yes, uh, please— uh come in, Misses Lampert." {R=Clarity} {Note=Harder P on Lampert} $QN=B102 01:19:33:02 01:19:37:13 00:00:04:11 Unmuted
1 3 "You're Charles Lampert's widow, yes?" {R=Clarity} $QN=B103 01:20:03:06 01:20:04:22 00:00:01:16 Unmuted
TRACK NAME: TEX
COMMENTS: $CN=4 {Actor=James Coburn} $Mins=5
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 "But if you do find that money…" {R=Accent} $QN=T101 01:37:50:08 01:37:52:15 00:00:02:07 Unmuted
1 2 "You ain't gonna forget to tell your buddies about it are ya?" {R=Accent} $QN=T102 01:37:53:15 01:37:55:24 00:00:02:08 Unmuted
TRACK NAME: SCOBIE
COMMENTS: $CN=5 {Actor=George Kennedy} $Mins=5
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 (effort HEAVY BREATHING) {R=Added} [EFF] $QN=SC101 01:34:05:10 01:34:15:04 00:00:09:18 Unmuted
1 2 (effort add PUNCH efforts, react to GETTING PUNCHED) {R=Added} [EFF] $QN=SC102 01:34:31:11 01:34:41:23 00:00:10:12 Unmuted
TRACK NAME: SYLVIE
COMMENTS: $CN=6 {Actor=Dominique Minot} $Mins=5
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 "It is infuriating that your unhappiness does not turn to fat!" {R=Accent} $QN=SY101 01:04:25:08 01:04:28:19 00:00:03:11 Unmuted
TRACK NAME: GIDEON
COMMENTS: $CN=7 {Actor=Ned Glass} $Mins=5
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 (effort) "OWWW!" (kicked in shin) {R=Added} [EFF] $QN=GD101 01:29:55:21 01:29:58:16 00:00:02:19 Unmuted
1 2 "Eh" (sotto/closed-mouth reaction) {R=Added} [ADLIB] $QN=GD102 01:38:08:16 01:38:10:07 00:00:01:16 Unmuted
TRACK NAME: JEAN-LOUIS
COMMENTS: $CN=8m {Actor=Thomas Chelimsky} $Mins=5
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 "When you get your divorce, are you going back to America?" {R=Revioce} $QN=JL101 01:07:14:07 01:07:17:07 00:00:03:00 Unmuted
1 2 "Yes, of course, but if you went back and wrote me a letter—" {R=Revoice} $QN=JL102 01:07:18:20 01:07:21:18 00:00:02:23 Unmuted
1 3 "Okay." {R=Revoice} $QN=JL103 01:07:24:13 01:07:25:01 00:00:00:13 Unmuted
TRACK NAME: Group
COMMENTS: $CN=99g {Char=Group} {Actor=Per LG} $Mins=3
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 ALL (Pool walla, FC "Whoo!" on man diving.) $QN=G101 01:04:29:05 01:04:43:09 00:00:14:04 Unmuted
1 2 (1M) "Madame" / "Miss" / "Merci" {R=Replace on-screen} $QN=G102 01:07:35:23 01:07:36:06 00:00:00:07 Unmuted
1 3 "D'accord" {R=Replace Futz} $QN=G103 01:10:47:20 01:10:48:08 00:00:00:13 Unmuted
1 4 (ALL KIDS) React to Punch and Judy Show, laughter bursts $QN=G104 01:25:12:07 01:25:23:22 00:00:11:15 Unmuted
1 5 (ALL) Laugh! Prelap cut $QN=G105 01:25:33:18 01:25:38:10 00:00:04:17 Unmuted
TRACK NAME: Group.dup1
COMMENTS: $CN=99g {Char=Group} {Actor=Per LG} $Mins=3
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 (2M 2F) Detail reaction to show $QN=G106 01:25:14:03 01:25:15:23 00:00:01:20 Unmuted
TRACK NAME: Group.dup2
COMMENTS: $CN=99g {Char=Group} {Actor=Per LG} $Mins=3
USER DELAY: 0 Samples
STATE:
CHANNEL EVENT CLIP NAME START TIME END TIME DURATION STATE
1 1 (1M) Boy reacts to show [ADLIB] [TBW] $QN=G107 01:25:15:21 01:25:18:12 00:00:02:16 Unmuted
1 2 (1M) Pointing boy $QN=G108 01:25:20:02 01:25:22:16 00:00:02:14 Unmuted
M A R K E R S L I S T I N G
# LOCATION TIME REFERENCE UNITS NAME COMMENTS
1 01:00:00:00 2880000 Samples {Title=Charade} {Client=Stanley Donen Films/Universal} {Supv=Allan Morrison} {Spot=2021-0520} $Reel=R1 [ADR]
2 01:18:50:20 57159360 Samples $Reel=R2
3 01:36:49:04 108919680 Samples $Reel=R3
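The clip names in this example carry three tag forms: `{Key=Value}` text fields, `$Key=Value` typed fields (e.g. `$QN=P109`), and bracketed `[FLAG]` markers like `[EFF]` or `[ADLIB]`. A rough regex-based sketch of pulling these apart — not the package's real grammar, which is a proper parser — might look like:

```python
import re

# Three tag forms appear in the export: {Key=Value}, $Key=Value, and [FLAG].
BRACE_TAG = re.compile(r"\{([^=}]+)=([^}]*)\}")
DOLLAR_TAG = re.compile(r"\$(\w+)=(\S+)")
FLAG_TAG = re.compile(r"\[(\w+)\]")


def parse_clip_name(name):
    """Split a tagged clip name into (line text, field dict, flag list)."""
    fields = {k: v for k, v in BRACE_TAG.findall(name)}
    fields.update({k: v for k, v in DOLLAR_TAG.findall(name)})
    flags = FLAG_TAG.findall(name)
    # Whatever remains after stripping tags is the clip's line/description.
    text = FLAG_TAG.sub("", DOLLAR_TAG.sub("", BRACE_TAG.sub("", name))).strip()
    return text, fields, flags
```

For example, `(effort LEAPING to balcony) [EFF] {R=Added} $QN=P109` yields the parenthesized description, the fields `R` and `QN`, and the flag `EFF`.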

Binary file not shown.


@@ -1,24 +1,38 @@
 .\" Manpage for ptulsconv
 .\" Contact https://github.com/iluvcapra/ptulsconv
-.TH ptulsconv 1 "12 Feb 2020" "0.3.3" "ptulsconv man page"
+.TH ptulsconv 1 "15 May 2020" "0.4.0" "ptulsconv man page"
 .SH NAME
 .BR "ptulsconv" " \- convert
 .IR "Avid Pro Tools" " text exports"
 .SH SYNOPSIS
 ptulsconv [OPTIONS] Export.txt
 .SH DESCRIPTION
-Description
+Convert a Pro Tools text export into a flat list of clip names with timecodes. A tagging
+language is interpreted to add columns and type the data. The default output format is
+an XML file for import into Filemaker Pro.
 .SH OPTIONS
 .IP "-h, --help"
-show a help message and exit
+show a help message and exit.
 .TP
 .RI "-i " "TC"
-Don't output events before this timecode, and offset all remaining
-events to start at this timecode.
+Drop events before this timecode.
 .TP
 .RI "-o " "TC"
-Don't output events occurring after this timecode.
-.SH SEE ALSO
-See Also
+Drop events after this timecode.
+.TP
+.RI "-m "
+Include muted clips.
+.TP
+.RI "--json "
+Output a JSON document instead of XML. (--xform will have no effect.)
+.TP
+.RI "--xform " "NAME"
+Convert the output with a built-in output transform.
+.TP
+.RI "--show-available-tags"
+Print a list of tags that are interpreted and exit.
+.TP
+.RI "--show-available-transforms"
+Print a list of built-in output transforms and exit.
 .SH AUTHOR
 Jamie Hardt (contact at https://github.com/iluvcapra/ptulsconv)


@@ -1,7 +1,5 @@
-from .ptuls_grammar import protools_text_export_grammar
-from .ptuls_parser_visitor import DictionaryParserVisitor
-from .transformations import TimecodeInterpreter
+from ptulsconv.docparser.ptuls_grammar import protools_text_export_grammar
 
-__version__ = '0.4.0'
+__version__ = '0.8.0'
 __author__ = 'Jamie Hardt'
 __license__ = 'MIT'


@@ -1,47 +1,72 @@
-from ptulsconv.commands import convert, dump_field_map, dump_xform_options
-from ptulsconv import __name__, __version__, __author__
 from optparse import OptionParser, OptionGroup
-from .reporting import print_status_style, print_banner_style, print_section_header_style, print_fatal_error
 import datetime
 import sys
-import traceback
+from ptulsconv import __name__, __version__, __author__
+from ptulsconv.commands import convert
+from ptulsconv.reporting import print_status_style, print_banner_style, print_section_header_style, print_fatal_error
+
+# TODO: Support Top-level modes
+# Modes we want:
+# - "raw" : Output the parsed text export document with no further processing, as json
+# - "tagged"? : Output the parsed result of the TagCompiler
+# - "doc" : Generate a full panoply of PDF reports contextually based on tagging
+
+
+def dump_field_map(output=sys.stdout):
+    from ptulsconv.docparser.tag_mapping import TagMapping
+    from ptulsconv.docparser.adr_entity import ADRLine
+    TagMapping.print_rules(ADRLine, output=output)
 
 
 def main():
+    """Entry point for the command-line invocation"""
     parser = OptionParser()
-    parser.usage = "ptulsconv TEXT_EXPORT.txt"
-    parser.add_option('-i', dest='in_time', help="Don't output events occurring before this timecode, and offset"
-                                                 " all events relative to this timecode.", metavar='TC')
-    parser.add_option('-o', dest='out_time', help="Don't output events occurring after this timecode.", metavar='TC')
-    # parser.add_option('-P', '--progress', default=False, action='store_true', dest='show_progress',
-    #                   help='Show progress bar.')
-    parser.add_option('-m', '--include-muted', default=False, action='store_true', dest='include_muted',
-                      help='Read muted clips.')
-    parser.add_option('--show-available-tags', dest='show_tags',
-                      action='store_true',
-                      default=False, help='Display tag mappings for the FMP XML output style and exit.')
-    parser.add_option('--show-available-transforms', dest='show_transforms',
-                      action='store_true',
-                      default=False, help='Display available built-in XSLT transforms.')
-    parser.add_option('--xform', dest='xslt', help="Convert with built-is XSLT transform.",
-                      default=None, metavar='NAME')
+    parser.usage = "ptulsconv [options] TEXT_EXPORT.txt"
+    parser.add_option('-f', '--format',
+                      dest='output_format',
+                      metavar='FMT',
+                      choices=['raw', 'tagged', 'doc'],
+                      default='doc',
+                      help='Set output format, `raw`, `tagged`, `doc`.')
+
+    warn_options = OptionGroup(title="Warning and Validation Options",
+                               parser=parser)
+    warn_options.add_option('-W', action='store_false',
+                            dest='warnings',
+                            default=True,
+                            help='Suppress warnings for common errors (missing code numbers etc.)')
+    parser.add_option_group(warn_options)
+
+    informational_options = OptionGroup(title="Informational Options",
+                                        parser=parser,
+                                        description='Print useful information and exit without processing '
+                                                    'input files.')
+    informational_options.add_option('--show-available-tags',
+                                     dest='show_tags',
+                                     action='store_true',
+                                     default=False,
+                                     help='Display tag mappings for the FMP XML '
+                                          'output style and exit.')
+    parser.add_option_group(informational_options)
 
     (options, args) = parser.parse_args(sys.argv)
-    print_banner_style("%s %s (c) 2020 %s. All rights reserved." % (__name__, __version__, __author__))
+    print_banner_style("%s %s (c) 2021 %s. All rights reserved." % (__name__, __version__, __author__))
     print_section_header_style("Startup")
-    print_status_style("This run started %s" % (datetime.datetime.now().isoformat() ) )
+    print_status_style("This run started %s" % (datetime.datetime.now().isoformat()))
 
     if options.show_tags:
-        dump_field_map('ADR')
-        sys.exit(0)
-    if options.show_transforms:
-        dump_xform_options()
+        dump_field_map()
         sys.exit(0)
 
     if len(args) < 2:
@@ -49,30 +74,16 @@ def main():
     parser.print_help(sys.stderr)
     sys.exit(22)
 
-    print_status_style("Input file is %s" % (args[1]))
-    if options.in_time:
-        print_status_style("Start at time %s" % (options.in_time))
-    else:
-        print_status_style("No start time given.")
-    if options.out_time:
-        print_status_style("End at time %s." % (options.out_time))
-    else:
-        print_status_style("No end time given.")
-    if options.include_muted:
-        print_status_style("Muted regions are included.")
-    else:
-        print_status_style("Muted regions are ignored.")
-
     try:
-        convert(input_file=args[1], start=options.in_time, end=options.out_time,
-                include_muted=options.include_muted, xsl=options.xslt,
-                progress=False, output=sys.stdout, log_output=sys.stderr)
+        major_mode = options.output_format
+        convert(input_file=args[1], major_mode=major_mode, warnings=options.warnings)
     except FileNotFoundError as e:
         print_fatal_error("Error trying to read input file")
         raise e
     except Exception as e:
+        import traceback
         print_fatal_error("Error trying to convert file")
         print("\033[31m" + e.__repr__() + "\033[0m", file=sys.stderr)
         print(traceback.format_exc())


@@ -1,10 +1,21 @@
 from fractions import Fraction
 import re
 import math
+from collections import namedtuple
 
 
-def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int, drop_frame_hint=False,
-                         include_fractional=False):
+class TimecodeFormat(namedtuple("_TimecodeFormat", "frame_duration logical_fps drop_frame")):
+    def smpte_to_seconds(self, smpte: str) -> Fraction:
+        frame_count = smpte_to_frame_count(smpte, self.logical_fps, drop_frame_hint=self.drop_frame)
+        return frame_count * self.frame_duration
+
+    def seconds_to_smpte(self, seconds: Fraction) -> str:
+        frame_count = int(seconds / self.frame_duration)
+        return frame_count_to_smpte(frame_count, self.logical_fps, self.drop_frame)
+
+
+def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int, drop_frame_hint=False) -> int:
     """
     Convert a string with a SMPTE timecode representation into a frame count.
@@ -14,7 +25,6 @@ def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int,
:param drop_frame_hint: `True` if the timecode rep is drop frame. This is ignored (and implied `True`) if :param drop_frame_hint: `True` if the timecode rep is drop frame. This is ignored (and implied `True`) if
the last separator in the timecode string is a semicolon. This is ignored (and implied `False`) if the last separator in the timecode string is a semicolon. This is ignored (and implied `False`) if
`frames_per_logical_second` is not 30 or 60. `frames_per_logical_second` is not 30 or 60.
:param include_fractional: If `True` fractional frames will be parsed and returned as a second retval in a tuple
""" """
assert frames_per_logical_second in [24, 25, 30, 48, 50, 60] assert frames_per_logical_second in [24, 25, 30, 48, 50, 60]
@@ -40,14 +50,11 @@ def smpte_to_frame_count(smpte_rep_string: str, frames_per_logical_second: int,
dropped_frames = frames_dropped_per_inst * inst_count dropped_frames = frames_dropped_per_inst * inst_count
frames = raw_frames - dropped_frames frames = raw_frames - dropped_frames
if include_fractional: return frames
return frames, frac
else:
return frames
def frame_count_to_smpte(frame_count: int, frames_per_logical_second: int, drop_frame: bool = False, def frame_count_to_smpte(frame_count: int, frames_per_logical_second: int, drop_frame: bool = False,
fractional_frame: float = None): fractional_frame: float = None) -> str:
assert frames_per_logical_second in [24, 25, 30, 48, 50, 60] assert frames_per_logical_second in [24, 25, 30, 48, 50, 60]
assert fractional_frame is None or fractional_frame < 1.0 assert fractional_frame is None or fractional_frame < 1.0
@@ -73,24 +80,16 @@ def frame_count_to_smpte(frame_count: int, frames_per_logical_second: int, drop_
return "%02i:%02i:%02i%s%02i" % (hh, mm, ss, separator, ff) return "%02i:%02i:%02i%s%02i" % (hh, mm, ss, separator, ff)
def footage_to_frame_count(footage_string, include_fractional=False): def footage_to_frame_count(footage_string):
m = re.search("(\d+)\+(\d+)(\.\d+)?", footage_string) m = re.search("(\d+)\+(\d+)(\.\d+)?", footage_string)
feet, frm, frac = m.groups() feet, frm, frac = m.groups()
feet, frm, frac = int(feet), int(frm), float(frac or 0.0) feet, frm, frac = int(feet), int(frm), float(frac or 0.0)
frames = feet * 16 + frm frames = feet * 16 + frm
if include_fractional: return frames
return frames, frac
else:
return frames
def frame_count_to_footage(frame_count, fractional_frames=None): def frame_count_to_footage(frame_count):
assert fractional_frames is None or fractional_frames < 1.0
feet, frm = divmod(frame_count, 16) feet, frm = divmod(frame_count, 16)
return "%i+%02i" % (feet, frm)
if fractional_frames is None:
return "%i+%02i" % (feet, frm)
else:
return "%i+%02i%s" % (feet, frm, ("%.3f" % fractional_frames)[1:])
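For readers following the drop-frame arithmetic that `smpte_to_frame_count` implements, here is a minimal self-contained sketch. It is illustrative only — `smpte_to_frames`, its parameters, and the simplified string handling are not part of ptulsconv's API, which also supports fractional rates via `Fraction`:

```python
def smpte_to_frames(tc: str, fps: int, drop: bool = False) -> int:
    """Convert an HH:MM:SS:FF (or HH:MM:SS;FF) string to a frame count."""
    hh, mm, ss, ff = [int(x) for x in tc.replace(";", ":").split(":")]
    frames = ((hh * 60 + mm) * 60 + ss) * fps + ff
    if drop and fps in (30, 60):
        # Drop-frame skips 2 frame numbers (4 at 60 fps) at the start of
        # every minute that is not a multiple of ten.
        per_inst = 2 * (fps // 30)
        total_minutes = hh * 60 + mm
        insts = total_minutes - total_minutes // 10
        frames -= per_inst * insts
    return frames
```

At 29.97 drop-frame, `"01:00:00;00"` lands on frame 107892 rather than 108000, which is the 0.1% correction drop-frame counting exists to provide.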


@@ -1,187 +1,196 @@
-import io
-import json
+import datetime
 import os
-import os.path
 import sys
-from xml.etree.ElementTree import TreeBuilder, tostring
-import subprocess
-import pathlib
-import ptulsconv
-from .reporting import print_section_header_style, print_status_style
-
-
-# field_map maps tags in the text export to fields in FMPXMLRESULT
-# - tuple field 0 is a list of tags, the first tag with contents will be used as source
-# - tuple field 1 is the field in FMPXMLRESULT
-# - tuple field 2 the constructor/type of the field
-adr_field_map = ((['Title', 'PT.Session.Name'], 'Title', str),
-                 (['Supv'], 'Supervisor', str),
-                 (['Client'], 'Client', str),
-                 (['Sc'], 'Scene', str),
-                 (['Ver'], 'Version', str),
-                 (['Reel'], 'Reel', str),
-                 (['PT.Clip.Start'], 'Start', str),
-                 (['PT.Clip.Finish'], 'Finish', str),
-                 (['PT.Clip.Start_Seconds'], 'Start Seconds', float),
-                 (['PT.Clip.Finish_Seconds'], 'Finish Seconds', float),
-                 (['PT.Clip.Start_Frames'], 'Start Frames', int),
-                 (['PT.Clip.Finish_Frames'], 'Finish Frames', int),
-                 (['P'], 'Priority', int),
-                 (['QN'], 'Cue Number', str),
-                 (['Char', 'PT.Track.Name'], 'Character Name', str),
-                 (['Actor'], 'Actor Name', str),
-                 (['CN'], 'Character Number', str),
-                 (['R'], 'Reason', str),
-                 (['Rq'], 'Requested by', str),
-                 (['Spot'], 'Spot', str),
-                 (['PT.Clip.Name', 'Line'], 'Line', str),
-                 (['Shot'], 'Shot', str),
-                 (['Note'], 'Note', str),
-                 (['Mins'], 'Time Budget Mins', float),
-                 (['EFF'], 'Effort', str),
-                 (['TV'], 'TV', str),
-                 (['TBW'], 'To Be Written', str),
-                 (['OMIT'], 'Omit', str),
-                 (['ADLIB'], 'Adlib', str),
-                 (['OPT'], 'Optional', str))
-
-
-def fmp_dump(data, input_file_name, output):
-    doc = TreeBuilder(element_factory=None)
-    doc.start('FMPXMLRESULT', {'xmlns': 'http://www.filemaker.com/fmpxmlresult'})
-
-    doc.start('ERRORCODE')
-    doc.data('0')
-    doc.end('ERRORCODE')
-
-    doc.start('PRODUCT', {'NAME': ptulsconv.__name__, 'VERSION': ptulsconv.__version__})
-    doc.end('PRODUCT')
-
-    doc.start('DATABASE', {'DATEFORMAT': 'MM/dd/yy', 'LAYOUT': 'summary', 'TIMEFORMAT': 'hh:mm:ss',
-                           'RECORDS': str(len(data['events'])), 'NAME': os.path.basename(input_file_name)})
-    doc.end('DATABASE')
-
-    doc.start('METADATA')
-    for field in adr_field_map:
-        tp = field[2]
-        ft = 'TEXT'
-        if tp is int or tp is float:
-            ft = 'NUMBER'
-        doc.start('FIELD', {'EMPTYOK': 'YES', 'MAXREPEAT': '1', 'NAME': field[1], 'TYPE': ft})
-        doc.end('FIELD')
-    doc.end('METADATA')
-
-    doc.start('RESULTSET', {'FOUND': str(len(data['events']))})
-    for event in data['events']:
-        doc.start('ROW')
-        for field in adr_field_map:
-            doc.start('COL')
-            doc.start('DATA')
-            for key_attempt in field[0]:
-                if key_attempt in event.keys():
-                    doc.data(str(event[key_attempt]))
-                    break
-            doc.end('DATA')
-            doc.end('COL')
-        doc.end('ROW')
-    doc.end('RESULTSET')
-    doc.end('FMPXMLRESULT')
-
-    docelem = doc.close()
-    xmlstr = tostring(docelem, encoding='unicode', method='xml')
-    output.write(xmlstr)
-
-
-import glob
-
-xslt_path = os.path.join(pathlib.Path(__file__).parent.absolute(), 'xslt')
-
-
-def xform_options():
-    return glob.glob(os.path.join(xslt_path, "*.xsl"))
-
-
-def dump_xform_options(output=sys.stdout):
-    print("# Available transforms:", file=output)
-    print("# Transform dir: %s" % (xslt_path), file=output)
-    for f in xform_options():
-        base = os.path.basename(f)
-        name, _ = os.path.splitext(base)
-        print("# " + name, file=output)
-
-
-def dump_field_map(field_map_name, output=sys.stdout):
-    output.write("# Map of Tag fields to XML output columns\n")
-    output.write("# (in order of precedence)\n")
-    output.write("# \n")
-    field_map = []
-    if field_map_name == 'ADR':
-        field_map = adr_field_map
-        output.write("# ADR Table Fields\n")
-    output.write("# \n")
-    output.write("# Tag Name                 | FMPXMLRESULT Column  | Type    | Column \n")
-    output.write("# -------------------------+----------------------+---------+--------\n")
-    for n, field in enumerate(field_map):
-        for tag in field[0]:
-            output.write("# %-24s-> %-20s | %-8s| %-7i\n" % (tag[:24], field[1][:20], field[2].__name__, n + 1))
-
-
-def fmp_transformed_dump(data, input_file, xsl_name, output):
-    pipe = io.StringIO()
-    print_status_style("Generating base XML")
-    fmp_dump(data, input_file, pipe)
-    strdata = pipe.getvalue()
-    print_status_style("Base XML size %i" % (len(strdata)))
-    print_status_style("Running xsltproc")
-
-    xsl_path = os.path.join(pathlib.Path(__file__).parent.absolute(), 'xslt', xsl_name + ".xsl")
-    print_status_style("Using xsl: %s" % (xsl_path))
-    result = subprocess.run(['xsltproc', xsl_path, '-'], input=strdata, text=True,
-                            stdout=output, shell=False, check=True)
-
-
-def convert(input_file, output_format='fmpxml', start=None, end=None,
-            progress=False, include_muted=False, xsl=None,
-            output=sys.stdout, log_output=sys.stderr):
-    with open(input_file, 'r') as file:
-        print_section_header_style('Parsing')
-        ast = ptulsconv.protools_text_export_grammar.parse(file.read())
-        dict_parser = ptulsconv.DictionaryParserVisitor()
-        parsed = dict_parser.visit(ast)
-
-        print_status_style('Session title: %s' % parsed['header']['session_name'])
-        print_status_style('Session timecode format: %f' % parsed['header']['timecode_format'])
-        print_status_style('Fount %i tracks' % len(parsed['tracks']))
-        print_status_style('Found %i markers' % len(parsed['markers']))
-
-        tcxform = ptulsconv.transformations.TimecodeInterpreter()
-        tagxform = ptulsconv.transformations.TagInterpreter(show_progress=progress, ignore_muted=(not include_muted),
-                                                            log_output=log_output)
-
-        parsed = tcxform.transform(parsed)
-        parsed = tagxform.transform(parsed)
-
-        if start is not None and end is not None:
-            start_fs = tcxform.convert_time(start,
-                                            frame_rate=parsed['header']['timecode_format'],
-                                            drop_frame=parsed['header']['timecode_drop_frame'])['frame_count']
-            end_fs = tcxform.convert_time(end,
-                                          frame_rate=parsed['header']['timecode_format'],
-                                          drop_frame=parsed['header']['timecode_drop_frame'])['frame_count']
-
-            subclipxform = ptulsconv.transformations.SubclipOfSequence(start=start_fs, end=end_fs)
-            parsed = subclipxform.transform(parsed)
-
-        if output_format == 'json':
-            json.dump(parsed, output)
-        elif output_format == 'fmpxml':
-            if xsl is None:
-                fmp_dump(parsed, input_file, output)
-            else:
-                print_section_header_style("Performing XSL Translation")
-                print_status_style("Using builtin translation: %s" % (xsl))
-                fmp_transformed_dump(parsed, input_file, xsl, output)
+from itertools import chain
+import csv
+from typing import List
+
+from .docparser.adr_entity import make_entities
+from .reporting import print_section_header_style, print_status_style, print_warning
+from .validations import *
+
+from ptulsconv.docparser import parse_document
+from ptulsconv.docparser.tag_compiler import TagCompiler
+from ptulsconv.broadcast_timecode import TimecodeFormat
+from fractions import Fraction
+
+from ptulsconv.pdf.supervisor_1pg import output_report as output_supervisor_1pg
+from ptulsconv.pdf.line_count import output_report as output_line_count
+from ptulsconv.pdf.talent_sides import output_report as output_talent_sides
+from ptulsconv.pdf.summary_log import output_report as output_summary
+from ptulsconv.pdf.continuity import output_report as output_continuity
+
+from json import JSONEncoder
+
+
+class MyEncoder(JSONEncoder):
+    force_denominator: Optional[int]
+
+    def default(self, o):
+        if isinstance(o, Fraction):
+            return dict(numerator=o.numerator, denominator=o.denominator)
+        else:
+            return o.__dict__
+
+
+def output_adr_csv(lines: List[ADRLine], time_format: TimecodeFormat):
+    reels = set([ln.reel for ln in lines])
+
+    for n, name in [(n.character_id, n.character_name) for n in lines]:
+        dir_name = "%s_%s" % (n, name)
+        os.makedirs(dir_name, exist_ok=True)
+        os.chdir(dir_name)
+        for reel in reels:
+            these_lines = [ln for ln in lines if ln.character_id == n and ln.reel == reel]
+            if len(these_lines) == 0:
+                continue
+
+            outfile_name = "%s_%s_%s_%s.csv" % (these_lines[0].title, n, these_lines[0].character_name, reel,)
+            with open(outfile_name, mode='w', newline='') as outfile:
+                writer = csv.writer(outfile, dialect='excel')
+                writer.writerow(['Title', 'Character Name', 'Cue Number',
+                                 'Reel', 'Version',
+                                 'Start', 'Finish',
+                                 'Start Seconds', 'Finish Seconds',
+                                 'Prompt',
+                                 'Reason', 'Note', 'TV'])
+                for event in these_lines:
+                    this_row = [event.title, event.character_name, event.cue_number,
+                                event.reel, event.version,
+                                time_format.seconds_to_smpte(event.start), time_format.seconds_to_smpte(event.finish),
+                                float(event.start), float(event.finish),
+                                event.prompt,
+                                event.reason, event.note, "TV" if event.tv else ""]
+
+                    writer.writerow(this_row)
+        os.chdir("..")
+
+
+# def output_avid_markers(lines):
+#     reels = set([ln['Reel'] for ln in lines if 'Reel' in ln.keys()])
+#
+#     for reel in reels:
+#         pass
+
+
+def create_adr_reports(lines: List[ADRLine], tc_display_format: TimecodeFormat, reel_list):
+    print_status_style("Creating ADR Report")
+    output_summary(lines, tc_display_format=tc_display_format)
+    print_status_style("Creating Line Count")
+    output_line_count(lines, reel_list=reel_list)
+    print_status_style("Creating Supervisor Logs directory and reports")
+    os.makedirs("Supervisor Logs", exist_ok=True)
+    os.chdir("Supervisor Logs")
+    output_supervisor_1pg(lines, tc_display_format=tc_display_format)
+    os.chdir("..")
+    print_status_style("Creating Director's Logs director and reports")
+    os.makedirs("Director Logs", exist_ok=True)
+    os.chdir("Director Logs")
+    output_summary(lines, tc_display_format=tc_display_format, by_character=True)
+    os.chdir("..")
+    print_status_style("Creating CSV outputs")
+    os.makedirs("CSV", exist_ok=True)
+    os.chdir("CSV")
+    output_adr_csv(lines, time_format=tc_display_format)
+    os.chdir("..")
+    # print_status_style("Creating Avid Marker XML files")
+    # os.makedirs("Avid Markers", exist_ok=True)
+    # os.chdir("Avid Markers")
+    # output_avid_markers(lines)
+    # os.chdir("..")
+    print_status_style("Creating Scripts directory and reports")
+    os.makedirs("Talent Scripts", exist_ok=True)
+    os.chdir("Talent Scripts")
+    output_talent_sides(lines, tc_display_format=tc_display_format)
+
+
+# def parse_text_export(file):
+#     ast = ptulsconv.protools_text_export_grammar.parse(file.read())
+#     dict_parser = ptulsconv.DictionaryParserVisitor()
+#     parsed = dict_parser.visit(ast)
+#     print_status_style('Session title: %s' % parsed['header']['session_name'])
+#     print_status_style('Session timecode format: %f' % parsed['header']['timecode_format'])
+#     print_status_style('Fount %i tracks' % len(parsed['tracks']))
+#     print_status_style('Found %i markers' % len(parsed['markers']))
+#     return parsed
+
+
+def convert(input_file, major_mode='fmpxml', output=sys.stdout, warnings=True):
+    session = parse_document(input_file)
+    session_tc_format = session.header.timecode_format
+
+    if major_mode == 'raw':
+        output.write(MyEncoder().encode(session))
+    else:
+        compiler = TagCompiler()
+        compiler.session = session
+        compiled_events = list(compiler.compile_events())
+
+        if major_mode == 'tagged':
+            output.write(MyEncoder().encode(compiled_events))
+        else:
+            generic_events, adr_lines = make_entities(compiled_events)
+
+            # TODO: Breakdown by titles
+            titles = set([x.title for x in (generic_events + adr_lines)])
+            assert len(titles) == 1, "Multiple titles per export is not supported"
+            print(titles)
+
+            if warnings:
+                perform_adr_validations(adr_lines)
+
+            if major_mode == 'doc':
+                print_section_header_style("Creating PDF Reports")
+                report_date = datetime.datetime.now()
+                reports_dir = "%s_%s" % (list(titles)[0], report_date.strftime("%Y-%m-%d_%H%M%S"))
+                os.makedirs(reports_dir, exist_ok=False)
+                os.chdir(reports_dir)
+                scenes = sorted([s for s in compiler.compile_all_time_spans() if s[0] == 'Sc'],
+                                key=lambda x: x[2])
+                output_continuity(scenes=scenes, tc_display_format=session_tc_format,
+                                  title=list(titles)[0], client="", supervisor="")
+                # reels = sorted([r for r in compiler.compile_all_time_spans() if r[0] == 'Reel'],
+                #                key=lambda x: x[2])
+                reels = ['R1', 'R2', 'R3', 'R4', 'R5', 'R6']
+                create_adr_reports(adr_lines,
+                                   tc_display_format=session_tc_format,
+                                   reel_list=sorted(reels))
+
+
+def perform_adr_validations(lines):
+    for warning in chain(validate_unique_field(lines,
+                                               field='cue_number',
+                                               scope='title'),
+                         validate_non_empty_field(lines,
+                                                  field='cue_number'),
+                         validate_non_empty_field(lines,
+                                                  field='character_id'),
+                         validate_non_empty_field(lines,
+                                                  field='title'),
+                         validate_dependent_value(lines,
+                                                  key_field='character_id',
+                                                  dependent_field='character_name'),
+                         validate_dependent_value(lines,
+                                                  key_field='character_id',
+                                                  dependent_field='actor_name')):
+        print_warning(warning.report_message())


@@ -0,0 +1 @@
from .doc_parser_visitor import parse_document


@@ -0,0 +1,140 @@
from ptulsconv.docparser.tag_compiler import Event
from typing import Optional, List, Tuple, Any
from dataclasses import dataclass
from fractions import Fraction
from ptulsconv.docparser.tag_mapping import TagMapping


def make_entities(from_events: List[Event]) -> Tuple[List['GenericEvent'], List['ADRLine']]:
    generic_events = list()
    adr_lines = list()
    for event in from_events:
        result: Any = make_entity(event)
        if type(result) is ADRLine:
            result: ADRLine
            adr_lines.append(result)
        elif type(result) is GenericEvent:
            result: GenericEvent
            generic_events.append(result)

    return generic_events, adr_lines


def make_entity(from_event: Event) -> Optional[object]:
    instance = GenericEvent
    tag_map = GenericEvent.tag_mapping
    if 'QN' in from_event.tags.keys():
        instance = ADRLine
        tag_map += ADRLine.tag_mapping

    new = instance()
    TagMapping.apply_rules(tag_map, from_event.tags,
                           from_event.clip_name, from_event.track_name,
                           from_event.session_name, new)
    new.start = from_event.start
    new.finish = from_event.finish
    return new


@dataclass
class GenericEvent:
    title: Optional[str]
    supervisor: Optional[str]
    client: Optional[str]
    scene: Optional[str]
    version: Optional[str]
    reel: Optional[str]
    start: Optional[Fraction]
    finish: Optional[Fraction]
    omitted: bool
    note: Optional[str]
    requested_by: Optional[str]

    tag_mapping = [
        TagMapping(source='Title', target="title", alt=TagMapping.ContentSource.Session),
        TagMapping(source="Supv", target="supervisor"),
        TagMapping(source="Client", target="client"),
        TagMapping(source="Sc", target="scene"),
        TagMapping(source="Ver", target="version"),
        TagMapping(source="Reel", target="reel"),
        TagMapping(source="Note", target="note"),
        TagMapping(source="Rq", target="requested_by"),
        TagMapping(source="OMIT", target="omitted",
                   formatter=(lambda x: len(x) > 0)),
    ]


@dataclass
class ADRLine(GenericEvent):
    priority: Optional[int]
    cue_number: Optional[str]
    character_id: Optional[str]
    character_name: Optional[str]
    actor_name: Optional[str]
    prompt: Optional[str]
    reason: Optional[str]
    time_budget_mins: Optional[float]
    spot: Optional[str]
    shot: Optional[str]
    effort: bool
    tv: bool
    tbw: bool
    adlib: bool
    optional: bool

    tag_mapping = [
        TagMapping(source="P", target="priority"),
        TagMapping(source="QN", target="cue_number"),
        TagMapping(source="CN", target="character_id"),
        TagMapping(source="Char", target="character_name", alt=TagMapping.ContentSource.Track),
        TagMapping(source="Actor", target="actor_name"),
        TagMapping(source="Line", target="prompt", alt=TagMapping.ContentSource.Clip),
        TagMapping(source="R", target="reason"),
        TagMapping(source="Mins", target="time_budget_mins",
                   formatter=(lambda n: float(n))),
        TagMapping(source="Spot", target="spot"),
        TagMapping(source="Shot", target="shot"),
        TagMapping(source="EFF", target="effort",
                   formatter=(lambda x: len(x) > 0)),
        TagMapping(source="TV", target="tv",
                   formatter=(lambda x: len(x) > 0)),
        TagMapping(source="TBW", target="tbw",
                   formatter=(lambda x: len(x) > 0)),
        TagMapping(source="ADLIB", target="adlib",
                   formatter=(lambda x: len(x) > 0)),
        TagMapping(source="OPT", target="optional",
                   formatter=(lambda x: len(x) > 0))
    ]

    def __init__(self):
        self.title = None
        self.supervisor = None
        self.client = None
        self.scene = None
        self.version = None
        self.reel = None
        self.start = None
        self.finish = None
        self.priority = None
        self.cue_number = None
        self.character_id = None
        self.character_name = None
        self.actor_name = None
        self.prompt = None
        self.reason = None
        self.requested_by = None
        self.time_budget_mins = None
        self.note = None
        self.spot = None
        self.shot = None
        self.effort = False
        self.tv = False
        self.tbw = False
        self.omitted = False
        self.adlib = False
        self.optional = False
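The dispatch rule in `make_entity` above — an event carrying a `QN` (cue number) tag becomes an `ADRLine`, anything else a `GenericEvent` — can be sketched in miniature. The names below (`Generic`, `ADR`, `make`) are hypothetical stand-ins, not the module's real classes:

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Generic:
    scene: Optional[str] = None


@dataclass
class ADR(Generic):
    cue_number: Optional[str] = None


def make(tags: Dict[str, str]) -> Generic:
    # Presence of a 'QN' tag promotes the event to an ADR line,
    # mirroring the check in make_entity; field names are illustrative.
    ev = ADR() if 'QN' in tags else Generic()
    ev.scene = tags.get('Sc')
    if isinstance(ev, ADR):
        ev.cue_number = tags.get('QN')
    return ev
```

Because `ADR` subclasses `Generic`, downstream code can treat both uniformly and branch on `isinstance` only where ADR-specific fields matter.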


@@ -0,0 +1,174 @@
from fractions import Fraction
from ptulsconv.broadcast_timecode import TimecodeFormat
from typing import Tuple, List, Iterator


class SessionDescriptor:
    header: "HeaderDescriptor"
    files: List["FileDescriptor"]
    clips: List["ClipDescriptor"]
    plugins: List["PluginDescriptor"]
    tracks: List["TrackDescriptor"]
    markers: List["MarkerDescriptor"]

    def __init__(self, **kwargs):
        self.header = kwargs['header']
        self.files = kwargs['files']
        self.clips = kwargs['clips']
        self.plugins = kwargs['plugins']
        self.tracks = kwargs['tracks']
        self.markers = kwargs['markers']

    def markers_timed(self) -> Iterator[Tuple['MarkerDescriptor', Fraction]]:
        for marker in self.markers:
            marker_time = self.header.convert_timecode(marker.location)
            yield marker, marker_time

    def tracks_clips(self) -> Iterator[Tuple['TrackDescriptor', 'TrackClipDescriptor']]:
        for track in self.tracks:
            for clip in track.clips:
                yield track, clip

    def track_clips_timed(self) -> Iterator[Tuple["TrackDescriptor", "TrackClipDescriptor",
                                                  Fraction, Fraction, Fraction]]:
        """
        :return: A Generator that yields track, clip, start time, finish time, and timestamp
        """
        for track, clip in self.tracks_clips():
            start_time = self.header.convert_timecode(clip.start_timecode)
            finish_time = self.header.convert_timecode(clip.finish_timecode)
            timestamp_time = self.header.convert_timecode(clip.timestamp) \
                if clip.timestamp is not None else None
            yield track, clip, start_time, finish_time, timestamp_time


class HeaderDescriptor:
    session_name: str
    sample_rate: float
    bit_depth: int
    start_timecode: str
    timecode_fps: str
    timecode_drop_frame: bool
    count_audio_tracks: int
    count_clips: int
    count_files: int

    def __init__(self, **kwargs):
        self.session_name = kwargs['session_name']
        self.sample_rate = kwargs['sample_rate']
        self.bit_depth = kwargs['bit_depth']
        self.start_timecode = kwargs['start_timecode']
        self.timecode_fps = kwargs['timecode_format']
        self.timecode_drop_frame = kwargs['timecode_drop_frame']
        self.count_audio_tracks = kwargs['count_audio_tracks']
        self.count_clips = kwargs['count_clips']
        self.count_files = kwargs['count_files']

    @property
    def timecode_format(self):
        return TimecodeFormat(frame_duration=self.frame_duration,
                              logical_fps=self.logical_fps,
                              drop_frame=self.timecode_drop_frame)

    def convert_timecode(self, tc_string: str) -> Fraction:
        return self.timecode_format.smpte_to_seconds(tc_string)

    @property
    def start_time(self) -> Fraction:
        """
        The start time of this session.

        :return: Start time in seconds
        """
        return self.convert_timecode(self.start_timecode)

    @property
    def logical_fps(self) -> int:
        return self._get_tc_format_params[0]

    @property
    def frame_duration(self) -> Fraction:
        return self._get_tc_format_params[1]

    @property
    def _get_tc_format_params(self) -> Tuple[int, Fraction]:
        frame_rates = {"23.976": (24, Fraction(1001, 24_000)),
                       "24": (24, Fraction(1, 24)),
                       "25": (25, Fraction(1, 25)),
                       "29.97": (30, Fraction(1001, 30_000)),
                       "30": (30, Fraction(1, 30)),
                       "59.94": (60, Fraction(1001, 60_000)),
                       "60": (60, Fraction(1, 60))
                       }

        if self.timecode_fps in frame_rates.keys():
            return frame_rates[self.timecode_fps]
        else:
            raise ValueError("Unrecognized TC rate (%s)" % self.timecode_format)


class TrackDescriptor:
    name: str
    comments: str
    user_delay_samples: int
    state: List[str]
    plugins: List[str]
    clips: List["TrackClipDescriptor"]

    def __init__(self, **kwargs):
        self.name = kwargs['name']
        self.comments = kwargs['comments']
        self.user_delay_samples = kwargs['user_delay_samples']
        self.state = kwargs['state']
        self.plugins = kwargs['plugins']
        self.clips = kwargs['clips']


class FileDescriptor(dict):
    pass


class TrackClipDescriptor:
    channel: int
    event: int
    clip_name: str
    start_timecode: str
    finish_timecode: str
    duration: str
    timestamp: str
    state: str

    def __init__(self, **kwargs):
        self.channel = kwargs['channel']
        self.event = kwargs['event']
        self.clip_name = kwargs['clip_name']
        self.start_timecode = kwargs['start_time']
        self.finish_timecode = kwargs['finish_time']
        self.duration = kwargs['duration']
        self.timestamp = kwargs['timestamp']
        self.state = kwargs['state']


class ClipDescriptor(dict):
    pass


class PluginDescriptor(dict):
    pass


class MarkerDescriptor:
    number: int
    location: str
    time_reference: int
    units: str
    name: str
    comments: str

    def __init__(self, **kwargs):
        self.number = kwargs['number']
        self.location = kwargs['location']
        self.time_reference = kwargs['time_reference']
        self.units = kwargs['units']
        self.name = kwargs['name']
        self.comments = kwargs['comments']
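`_get_tc_format_params` pairs each nominal rate string with a logical fps and an exact per-frame duration; keeping durations as `Fraction` avoids floating-point drift for the NTSC pull-down rates. A reduced sketch of the same idea (subset of the table, hypothetical function name):

```python
from fractions import Fraction

# Logical fps paired with an exact frame duration, as in HeaderDescriptor.
FRAME_RATES = {
    "23.976": (24, Fraction(1001, 24_000)),
    "24": (24, Fraction(1, 24)),
    "29.97": (30, Fraction(1001, 30_000)),
}


def frames_to_seconds(frame_count: int, rate: str) -> Fraction:
    _, frame_duration = FRAME_RATES[rate]
    return frame_count * frame_duration
```

One "hour" of frames (86400 at logical 24 fps) comes out exactly 3600 s at true 24, but 3603.6 s at 23.976, which is the 0.1% pull-down the exact fractions preserve.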


@@ -0,0 +1,172 @@
from parsimonious.nodes import NodeVisitor
from .doc_entity import SessionDescriptor, HeaderDescriptor, TrackDescriptor, FileDescriptor, \
    TrackClipDescriptor, ClipDescriptor, PluginDescriptor, MarkerDescriptor


def parse_document(path: str) -> SessionDescriptor:
    """
    Parse a Pro Tools text export.

    :param path: path to a file
    :return: the session descriptor
    """
    from .ptuls_grammar import protools_text_export_grammar
    with open(path, 'r') as f:
        ast = protools_text_export_grammar.parse(f.read())
        return DocParserVisitor().visit(ast)


class DocParserVisitor(NodeVisitor):

    @staticmethod
    def visit_document(_, visited_children) -> SessionDescriptor:
        files = next(iter(visited_children[1]), None)
        clips = next(iter(visited_children[2]), None)
        plugins = next(iter(visited_children[3]), None)
        tracks = next(iter(visited_children[4]), None)
        markers = next(iter(visited_children[5]), None)
        return SessionDescriptor(header=visited_children[0],
                                 files=files,
                                 clips=clips,
                                 plugins=plugins,
                                 tracks=tracks,
                                 markers=markers)

    @staticmethod
    def visit_header(_, visited_children):
        tc_drop = False
        for _ in visited_children[20]:
            tc_drop = True

        return HeaderDescriptor(session_name=visited_children[2],
                                sample_rate=visited_children[6],
                                bit_depth=visited_children[10],
                                start_timecode=visited_children[15],
                                timecode_format=visited_children[19],
                                timecode_drop_frame=tc_drop,
                                count_audio_tracks=visited_children[25],
                                count_clips=visited_children[29],
                                count_files=visited_children[33])

    @staticmethod
    def visit_files_section(_, visited_children):
        return list(map(lambda child: FileDescriptor(filename=child[0], path=child[2]), visited_children[2]))

    @staticmethod
    def visit_clips_section(_, visited_children):
        channel = next(iter(visited_children[2][3]), 1)
        return list(map(lambda child: ClipDescriptor(clip_name=child[0], file=child[2], channel=channel),
                        visited_children[2]))

    @staticmethod
    def visit_plugin_listing(_, visited_children):
        return list(map(lambda child: PluginDescriptor(manufacturer=child[0],
                                                       plugin_name=child[2],
                                                       version=child[4],
                                                       format=child[6],
                                                       stems=child[8],
                                                       count_instances=child[10]),
                        visited_children[2]))

    @staticmethod
    def visit_track_block(_, visited_children):
        track_header, track_clip_list = visited_children
        clips = []
        for clip in track_clip_list:
            if clip[0] is not None:
                clips.append(clip[0])

        plugins = []
        for plugin_opt in track_header[16]:
            for plugin in plugin_opt[1]:
                plugins.append(plugin[1])

        return TrackDescriptor(
            name=track_header[2],
            comments=track_header[6],
            user_delay_samples=track_header[10],
            state=track_header[14],
            plugins=plugins,
            clips=clips
        )

    @staticmethod
    def visit_frame_rate(node, _):
        return node.text

    @staticmethod
    def visit_track_listing(_, visited_children):
        return visited_children[1]

    @staticmethod
    def visit_track_clip_entry(_, visited_children):
        timestamp = None
        if isinstance(visited_children[14], list):
            timestamp = visited_children[14][0][0]

        return TrackClipDescriptor(channel=visited_children[0],
                                   event=visited_children[3],
                                   clip_name=visited_children[6],
                                   start_time=visited_children[8],
                                   finish_time=visited_children[10],
                                   duration=visited_children[12],
                                   timestamp=timestamp,
                                   state=visited_children[15])

    @staticmethod
    def visit_track_state_list(_, visited_children):
        states = []
        for next_state in visited_children:
            states.append(next_state[0][0].text)
        return states

    @staticmethod
    def visit_track_clip_state(node, _):
        return node.text

    @staticmethod
    def visit_markers_listing(_, visited_children):
        markers = []
        for marker in visited_children[2]:
            markers.append(marker)
        return markers

    @staticmethod
    def visit_marker_record(_, visited_children):
        return MarkerDescriptor(number=visited_children[0],
                                location=visited_children[3],
                                time_reference=visited_children[5],
                                units=visited_children[8],
                                name=visited_children[10],
                                comments=visited_children[12])

    @staticmethod
    def visit_formatted_clip_name(_, visited_children):
        return visited_children[1].text

    @staticmethod
    def visit_string_value(node, _):
        return node.text.strip(" ")

    @staticmethod
    def visit_integer_value(node, _):
        return int(node.text)

    # def visit_timecode_value(self, node, visited_children):
    #     return node.text.strip(" ")

    @staticmethod
    def visit_float_value(node, _):
        return float(node.text)

    def visit_block_ending(self, node, visited_children):
        pass

    def generic_visit(self, node, visited_children):
        """ The generic visit method. """
        return visited_children or node


@@ -0,0 +1 @@
from dataclasses import dataclass


@@ -7,11 +7,12 @@ protools_text_export_grammar = Grammar(
     "SAMPLE RATE:" fs float_value rs
     "BIT DEPTH:" fs integer_value "-bit" rs
     "SESSION START TIMECODE:" fs string_value rs
-    "TIMECODE FORMAT:" fs float_value " Drop"? " Frame" rs
+    "TIMECODE FORMAT:" fs frame_rate " Drop"? " Frame" rs
     "# OF AUDIO TRACKS:" fs integer_value rs
     "# OF AUDIO CLIPS:" fs integer_value rs
     "# OF AUDIO FILES:" fs integer_value rs block_ending
+    frame_rate = ("60" / "59.94" / "30" / "29.97" / "25" / "24" / "23.976")
     files_section = files_header files_column_header file_record* block_ending
     files_header = "F I L E S  I N  S E S S I O N" rs
     files_column_header = "Filename" isp fs "Location" rs
@@ -68,6 +69,6 @@ protools_text_export_grammar = Grammar(
     block_ending = rs rs
     string_value = ~"[^\t\n]*"
     integer_value = ~"\d+"
-    float_value = ~"\d+(\.\d+)"
+    float_value = ~"\d+(\.\d+)?"
     isp = ~"[^\d\t\n]*"
     """)
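The two grammar changes are related: `float_value` formerly required a fractional part, so an integer rate such as `24` could never match, and the new dedicated `frame_rate` rule matches the rate strings literally. The regex difference in isolation:

```python
import re

old_float = re.compile(r"\d+(\.\d+)")   # decimal part required
new_float = re.compile(r"\d+(\.\d+)?")  # decimal part optional

assert old_float.fullmatch("29.97")
assert old_float.fullmatch("24") is None   # integer rates failed to parse
assert new_float.fullmatch("24")
```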


@@ -0,0 +1,189 @@
import sys
from collections import namedtuple
from fractions import Fraction
from typing import Iterator, Tuple, Callable, Generator, Dict, List
import ptulsconv.docparser.doc_entity as doc_entity
from .tagged_string_parser_visitor import parse_tags, TagPreModes
from dataclasses import dataclass
@dataclass
class Event:
clip_name: str
track_name: str
session_name: str
tags: Dict[str, str]
start: Fraction
finish: Fraction
class TagCompiler:
Intermediate = namedtuple('Intermediate', 'track_content track_tags track_comment_tags '
'clip_content clip_tags clip_tag_mode start finish')
session: doc_entity.SessionDescriptor
def compile_all_time_spans(self) -> List[Tuple[str, str, Fraction, Fraction]]:
ret_list = list()
for element in self.parse_data():
if element.clip_tag_mode == TagPreModes.TIMESPAN:
for k in element.clip_tags.keys():
ret_list.append((k, element.clip_tags[k], element.start, element.finish))
return ret_list
def compile_tag_list(self) -> Dict[str, List[str]]:
tags_dict = dict()
def update_tags_dict(other_dict: dict):
for k in other_dict.keys():
if k not in tags_dict.keys():
tags_dict[k] = set()
tags_dict[k].add(other_dict[k])
for parsed in self.parse_data():
update_tags_dict(parsed.clip_tags)
update_tags_dict(parsed.track_tags)
update_tags_dict(parsed.track_comment_tags)
session_tags = parse_tags(self.session.header.session_name).tag_dict
update_tags_dict(session_tags)
for m in self.session.markers:
marker_tags = parse_tags(m.name).tag_dict
marker_comment_tags = parse_tags(m.comments).tag_dict
update_tags_dict(marker_tags)
update_tags_dict(marker_comment_tags)
return tags_dict
def compile_events(self) -> Iterator[Event]:
step0 = self.parse_data()
step1 = self.apply_appends(step0)
step2 = self.collect_time_spans(step1)
step3 = self.apply_tags(step2)
for datum in step3:
yield Event(clip_name=datum[0], track_name=datum[1], session_name=datum[2],
tags=datum[3], start=datum[4], finish=datum[5])
def _marker_tags(self, at):
retval = dict()
applicable = [(m, t) for (m, t) in self.session.markers_timed() if t <= at]
for marker, time in sorted(applicable, key=lambda x: x[1]):
retval.update(parse_tags(marker.comments).tag_dict)
retval.update(parse_tags(marker.name).tag_dict)
return retval
@staticmethod
def _coalesce_tags(clip_tags: dict, track_tags: dict,
track_comment_tags: dict,
timespan_tags: dict,
marker_tags: dict, session_tags: dict):
effective_tags = dict()
effective_tags.update(session_tags)
effective_tags.update(marker_tags)
effective_tags.update(timespan_tags)
effective_tags.update(track_comment_tags)
effective_tags.update(track_tags)
effective_tags.update(clip_tags)
return effective_tags
def parse_data(self) -> Iterator[Intermediate]:
for track, clip, start, finish, _ in self.session.track_clips_timed():
if clip.state == 'Muted':
continue
track_parsed = parse_tags(track.name)
track_comments_parsed = parse_tags(track.comments)
clip_parsed = parse_tags(clip.clip_name)
yield TagCompiler.Intermediate(track_content=track_parsed.content,
track_tags=track_parsed.tag_dict,
track_comment_tags=track_comments_parsed.tag_dict,
clip_content=clip_parsed.content,
clip_tags=clip_parsed.tag_dict,
clip_tag_mode=clip_parsed.mode,
start=start, finish=finish)
@staticmethod
def apply_appends(parsed: Iterator[Intermediate]) -> Iterator[Intermediate]:
def should_append(a, b):
return b.clip_tag_mode == TagPreModes.APPEND and b.start >= a.finish
def do_append(a, b):
merged_tags = dict(a.clip_tags)
merged_tags.update(b.clip_tags)
return TagCompiler.Intermediate(track_content=a.track_content,
track_tags=a.track_tags,
track_comment_tags=a.track_comment_tags,
clip_content=a.clip_content + ' ' + b.clip_content,
clip_tags=merged_tags, clip_tag_mode=a.clip_tag_mode,
start=a.start, finish=b.finish)
yield from apply_appends(parsed, should_append, do_append)
@staticmethod
    def collect_time_spans(parsed: Iterator[Intermediate]) -> \
            Iterator[Tuple[Intermediate, List[Tuple[dict, Fraction, Fraction]]]]:
time_spans = list()
for item in parsed:
if item.clip_tag_mode == TagPreModes.TIMESPAN:
time_spans.append((item.clip_tags, item.start, item.finish))
else:
yield item, list(time_spans)
@staticmethod
def _time_span_tags(at_time: Fraction, applicable_spans) -> dict:
retval = dict()
for tags in reversed([a[0] for a in applicable_spans if a[1] <= at_time <= a[2]]):
retval.update(tags)
return retval
def apply_tags(self, parsed_with_time_spans) -> Iterator[Tuple[str, str, str, dict, Fraction, Fraction]]:
session_parsed = parse_tags(self.session.header.session_name)
for event, time_spans in parsed_with_time_spans:
event: 'TagCompiler.Intermediate'
marker_tags = self._marker_tags(event.start)
time_span_tags = self._time_span_tags(event.start, time_spans)
tags = self._coalesce_tags(clip_tags=event.clip_tags,
track_tags=event.track_tags,
track_comment_tags=event.track_comment_tags,
timespan_tags=time_span_tags,
marker_tags=marker_tags,
session_tags=session_parsed.tag_dict)
yield event.clip_content, event.track_content, session_parsed.content, tags, event.start, event.finish
def apply_appends(source: Iterator,
should_append: Callable,
do_append: Callable) -> Generator:
"""
:param source:
:param should_append: Called with two variables a and b, your
function should return true if b should be
appended to a
:param do_append: Called with two variables a and b, your function
should return
:returns: A Generator
"""
    # Guard the initial next(): under PEP 479 a bare StopIteration
    # escaping a generator becomes a RuntimeError.
    try:
        this_element = next(source)
    except StopIteration:
        return
for element in source:
if should_append(this_element, element):
this_element = do_append(this_element, element)
else:
yield this_element
this_element = element
yield this_element
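`apply_appends` is generic over its element type. A minimal standalone sketch of how `should_append`/`do_append` drive the merge, using hypothetical `(start, finish)` tuples rather than the project's `Intermediate` records (the merging loop is restated here, with a guard for empty input, so the example runs on its own):

```python
from typing import Callable, Generator, Iterator

def apply_appends(source: Iterator,
                  should_append: Callable,
                  do_append: Callable) -> Generator:
    # Same merging loop as the function above, restated for the demo.
    try:
        this_element = next(source)
    except StopIteration:
        return
    for element in source:
        if should_append(this_element, element):
            this_element = do_append(this_element, element)
        else:
            yield this_element
            this_element = element
    yield this_element

# Hypothetical (start, finish) spans: merge a span into the previous
# one whenever it begins exactly where the previous one ended.
spans = iter([(0, 10), (10, 20), (25, 30), (30, 40)])
merged = list(apply_appends(spans,
                            lambda a, b: b[0] == a[1],
                            lambda a, b: (a[0], b[1])))
print(merged)  # → [(0, 20), (25, 40)]
```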


@@ -0,0 +1,83 @@
import sys
from enum import Enum
from typing import Optional, Callable, Any, List
class TagMapping:
class ContentSource(Enum):
        Session = 1
        Track = 2
        Clip = 3
source: str
alternate_source: Optional[ContentSource]
formatter: Callable[[str], Any]
@staticmethod
    def print_rules(for_type: type, output=sys.stdout):
        format_str = "%-20s | %-20s | %-25s"
        hr = "%s+%s+%s" % ("-" * 21, "-" * 22, "-" * 26)
        print("Tag mapping for %s" % for_type.__name__, file=output)
        print(hr, file=output)
        print(format_str % ("Tag Source", "Target", "Type"),
              file=output)
        print(hr, file=output)
for rule in for_type.tag_mapping:
t = for_type.__annotations__[rule.target]
print(format_str % (rule.source, rule.target, t),
file=output)
if rule.alternate_source is TagMapping.ContentSource.Session:
print(format_str % (" - (Session Name)", rule.target, t),
file=output)
elif rule.alternate_source is TagMapping.ContentSource.Track:
print(format_str % (" - (Track Name)", rule.target, t),
file=output)
elif rule.alternate_source is TagMapping.ContentSource.Clip:
print(format_str % (" - (Clip Name)", rule.target, t),
file=output)
@staticmethod
def apply_rules(rules: List['TagMapping'],
tags: dict,
clip_content: str,
track_content: str,
session_content: str,
to: object):
done = set()
for rule in rules:
if rule.target in done:
continue
if rule.apply(tags, clip_content, track_content, session_content, to):
                # set.add, not set.update: update() would insert each
                # character of the target name individually.
                done.add(rule.target)
def __init__(self, source: str,
target: str,
alt: Optional[ContentSource] = None,
formatter=None):
self.source = source
self.target = target
self.alternate_source = alt
self.formatter = formatter or (lambda x: x)
def apply(self, tags: dict,
clip_content: str,
track_content: str,
session_content: str, to: object) -> bool:
new_value = None
if self.source in tags.keys():
new_value = tags[self.source]
elif self.alternate_source == TagMapping.ContentSource.Session:
new_value = session_content
elif self.alternate_source == TagMapping.ContentSource.Track:
new_value = track_content
elif self.alternate_source == TagMapping.ContentSource.Clip:
new_value = clip_content
if new_value is not None:
setattr(to, self.target, self.formatter(new_value))
return True
else:
return False


@@ -0,0 +1,97 @@
from parsimonious import NodeVisitor, Grammar
from typing import Dict, Optional
from enum import Enum
class TagPreModes(Enum):
NORMAL = 'Normal'
APPEND = 'Append'
TIMESPAN = 'Timespan'
tag_grammar = Grammar(
r"""
document = modifier? line? word_sep? tag_list?
line = word (word_sep word)*
tag_list = tag*
tag = key_tag / short_tag / full_text_tag / tag_junk
key_tag = "[" key "]" word_sep?
short_tag = "$" key "=" word word_sep?
full_text_tag = "{" key "=" value "}" word_sep?
key = ~"[A-Za-z][A-Za-z0-9_]*"
value = ~"[^}]+"
tag_junk = word word_sep?
word = ~"[^ \[\{\$][^ ]*"
word_sep = ~" +"
modifier = ("@" / "&") word_sep?
"""
)
def parse_tags(prompt) -> "TaggedStringResult":
ast = tag_grammar.parse(prompt)
return TagListVisitor().visit(ast)
class TaggedStringResult:
content: Optional[str]
tag_dict: Optional[Dict[str, str]]
mode: TagPreModes
def __init__(self, content, tag_dict, mode):
self.content = content
self.tag_dict = tag_dict
self.mode = mode
class TagListVisitor(NodeVisitor):
@staticmethod
def visit_document(_, visited_children) -> TaggedStringResult:
modifier_opt, line_opt, _, tag_list_opt = visited_children
return TaggedStringResult(content=next(iter(line_opt), None),
tag_dict=next(iter(tag_list_opt), None),
mode=TagPreModes(next(iter(modifier_opt), 'Normal'))
)
@staticmethod
def visit_line(node, _):
        return node.text.strip(" ")
@staticmethod
def visit_modifier(node, _):
if node.text.startswith('@'):
return TagPreModes.TIMESPAN
elif node.text.startswith('&'):
return TagPreModes.APPEND
else:
return TagPreModes.NORMAL
@staticmethod
def visit_tag_list(_, visited_children):
retdict = dict()
for child in visited_children:
if child[0] is not None:
k, v = child[0]
retdict[k] = v
return retdict
@staticmethod
def visit_key_tag(_, children):
return children[1].text, children[1].text
@staticmethod
def visit_short_tag(_, children):
return children[1].text, children[3].text
@staticmethod
def visit_full_text_tag(_, children):
return children[1].text, children[3].text
@staticmethod
def visit_tag_junk(_node, _visited_children):
return None
def generic_visit(self, node, visited_children) -> object:
return visited_children or node
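For a feel of the tag syntax the PEG grammar above accepts, here is a rough regex-based sketch. It is an approximation for illustration only (the real parser is the parsimonious grammar; `sketch_parse_tags` and its regex are invented for this demo), covering the `@`/`&` modifier, free-text content, and the `[KEY]`, `$key=word`, and `{key=value}` tag forms:

```python
import re

TAG_RE = re.compile(r"\[(\w+)\]|\$(\w+)=(\S+)|\{(\w+)=([^}]+)\}")

def sketch_parse_tags(text):
    # A leading '@' marks a Timespan, '&' an Append (see TagPreModes).
    mode = 'Normal'
    if text[:1] == '@':
        mode, text = 'Timespan', text[1:].lstrip()
    elif text[:1] == '&':
        mode, text = 'Append', text[1:].lstrip()
    tags = {}
    first_tag = None
    for m in TAG_RE.finditer(text):
        first_tag = first_tag if first_tag is not None else m.start()
        key_only, skey, sval, fkey, fval = m.groups()
        if key_only:
            tags[key_only] = key_only   # [KEY] -> KEY: KEY
        elif skey:
            tags[skey] = sval           # $key=word
        else:
            tags[fkey] = fval           # {key=value with spaces}
    content = text[:first_tag].strip() if first_tag is not None else text.strip()
    return content, tags, mode

print(sketch_parse_tags("@ INT HOUSE - DAY [ADR] $SC=101 {note=fix line}"))
# → ('INT HOUSE - DAY', {'ADR': 'ADR', 'SC': '101', 'note': 'fix line'}, 'Timespan')
```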

ptulsconv/movie_export.py Normal file

@@ -0,0 +1,14 @@
import ffmpeg # ffmpeg-python
# TODO: Implement movie export
# def create_movie(event):
# start = event['Movie.Start_Offset_Seconds']
# duration = event['PT.Clip.Finish_Seconds'] - event['PT.Clip.Start_Seconds']
# input_movie = event['Movie.Filename']
# print("Will make movie starting at {}, dur {} from movie {}".format(start, duration, input_movie))
#
#
# def export_movies(events):
# for event in events:
# create_movie(event)

ptulsconv/pdf/__init__.py Normal file

@@ -0,0 +1,335 @@
import datetime
from reportlab.pdfbase.pdfmetrics import (getAscent, getDescent)
from reportlab.lib.units import inch
from reportlab.pdfgen import canvas
from reportlab.platypus.doctemplate import BaseDocTemplate, PageTemplate
from reportlab.platypus.frames import Frame
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
# TODO: A Generic report useful for spotting
# TODO: A report useful for M&E mixer's notes
# This is from https://code.activestate.com/recipes/576832/ for
# generating page count messages
class ReportCanvas(canvas.Canvas):
def __init__(self, *args, **kwargs):
canvas.Canvas.__init__(self, *args, **kwargs)
self._saved_page_states = []
self._report_date = datetime.datetime.now()
def showPage(self):
self._saved_page_states.append(dict(self.__dict__))
self._startPage()
def save(self):
"""add page info to each page (page x of y)"""
num_pages = len(self._saved_page_states)
for state in self._saved_page_states:
self.__dict__.update(state)
self.draw_page_number(num_pages)
canvas.Canvas.showPage(self)
canvas.Canvas.save(self)
def draw_page_number(self, page_count):
self.saveState()
self.setFont("Futura", 10)
self.drawString(0.5 * inch, 0.5 * inch, "Page %d of %d" % (self._pageNumber, page_count))
right_edge = self._pagesize[0] - 0.5 * inch
self.drawRightString(right_edge, 0.5 * inch, self._report_date.strftime("%m/%d/%Y %H:%M"))
top_line = self.beginPath()
top_line.moveTo(0.5 * inch, 0.75 * inch)
top_line.lineTo(right_edge, 0.75 * inch)
self.setLineWidth(0.5)
self.drawPath(top_line)
self.restoreState()
class ADRDocTemplate(BaseDocTemplate):
def build(self, flowables, filename=None, canvasmaker=ReportCanvas):
BaseDocTemplate.build(self, flowables, filename, canvasmaker)
def make_doc_template(page_size, filename, document_title,
title: str,
supervisor: str,
document_header: str,
client: str,
document_subheader: str,
left_margin=0.5 * inch) -> ADRDocTemplate:
right_margin = top_margin = bottom_margin = 0.5 * inch
page_box = GRect(0., 0., page_size[0], page_size[1])
_, page_box = page_box.split_x(left_margin, direction='l')
_, page_box = page_box.split_x(right_margin, direction='r')
_, page_box = page_box.split_y(bottom_margin, direction='u')
_, page_box = page_box.split_y(top_margin, direction='d')
footer_box, page_box = page_box.split_y(0.25 * inch, direction='u')
header_box, page_box = page_box.split_y(0.75 * inch, direction='d')
title_box, report_box = header_box.split_x(3.5 * inch, direction='r')
page_template = PageTemplate(id="Main",
frames=[Frame(page_box.min_x, page_box.min_y, page_box.width, page_box.height)],
onPage=lambda c, _: draw_header_footer(c, report_box, title_box, footer_box,
title=title, supervisor=supervisor,
document_subheader=document_subheader,
client=client,
doc_title=document_header))
pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))
doc = ADRDocTemplate(filename,
title=document_title,
author=supervisor,
pagesize=page_size,
leftMargin=left_margin, rightMargin=right_margin,
topMargin=top_margin, bottomMargin=bottom_margin)
doc.addPageTemplates([page_template])
return doc
def time_format(mins, zero_str="-"):
if mins == 0. and zero_str is not None:
return zero_str
elif mins < 60.:
return "%im" % round(mins)
else:
m = round(mins)
hh, mm = divmod(m, 60)
return "%i:%02i" % (hh, mm)
def draw_header_footer(a_canvas: ReportCanvas, left_box, right_box, footer_box, title: str, supervisor: str,
document_subheader: str, client: str, doc_title=""):
(_supervisor_box, client_box,), title_box = right_box.divide_y([16., 16., ])
title_box.draw_text_cell(a_canvas, title, "Futura", 18, inset_y=2., inset_x=5.)
client_box.draw_text_cell(a_canvas, client, "Futura", 11, inset_y=2., inset_x=5.)
a_canvas.saveState()
a_canvas.setLineWidth(0.5)
tline = a_canvas.beginPath()
tline.moveTo(left_box.min_x, right_box.min_y)
tline.lineTo(right_box.max_x, right_box.min_y)
a_canvas.drawPath(tline)
tline2 = a_canvas.beginPath()
tline2.moveTo(right_box.min_x, left_box.min_y)
tline2.lineTo(right_box.min_x, left_box.max_y)
a_canvas.drawPath(tline2)
a_canvas.restoreState()
(doc_title_cell, spotting_version_cell,), _ = left_box.divide_y([18., 14], direction='d')
doc_title_cell.draw_text_cell(a_canvas, doc_title, 'Futura', 14., inset_y=2.)
if document_subheader is not None:
spotting_version_cell.draw_text_cell(a_canvas, document_subheader, 'Futura', 12., inset_y=2.)
if supervisor is not None:
a_canvas.setFont('Futura', 11.)
a_canvas.drawCentredString(footer_box.min_x + footer_box.width / 2., footer_box.min_y, supervisor)
class GRect:
def __init__(self, x, y, width, height, debug_name=None):
self.x = x
self.y = y
self.width = width
self.height = height
self.debug_name = debug_name
self.normalize()
@property
def min_x(self):
return self.x
@property
def min_y(self):
return self.y
@property
def max_x(self):
return self.x + self.width
@property
def max_y(self):
return self.y + self.height
@property
def center_x(self):
return self.x + self.width / 2
@property
def center_y(self):
return self.y + self.height / 2
def normalize(self):
if self.width < 0.:
self.width = abs(self.width)
self.x = self.x - self.width
if self.height < 0.:
self.height = abs(self.height)
self.y = self.y - self.height
def split_x(self, at, direction='l'):
if at >= self.width:
return None, self
elif at <= 0:
return self, None
else:
if direction == 'l':
return (GRect(self.min_x, self.min_y, at, self.height),
GRect(self.min_x + at, self.y, self.width - at, self.height))
else:
return (GRect(self.max_x - at, self.y, at, self.height),
GRect(self.min_x, self.y, self.width - at, self.height))
def split_y(self, at, direction='u'):
if at >= self.height:
return None, self
elif at <= 0:
return self, None
else:
if direction == 'u':
return (GRect(self.x, self.y, self.width, at),
GRect(self.x, self.y + at, self.width, self.height - at))
else:
return (GRect(self.x, self.max_y - at, self.width, at),
GRect(self.x, self.y, self.width, self.height - at))
def inset_xy(self, dx, dy):
return GRect(self.x + dx, self.y + dy, self.width - dx * 2, self.height - dy * 2)
def inset(self, d):
return self.inset_xy(d, d)
def __repr__(self):
return "<GRect x=%f y=%f width=%f height=%f>" % (self.x, self.y, self.width, self.height)
def divide_x(self, x_list, direction='l'):
ret_list = list()
rem = self
for item in x_list:
s, rem = rem.split_x(item, direction)
ret_list.append(s)
return ret_list, rem
def divide_y(self, y_list, direction='u'):
ret_list = list()
rem = self
for item in y_list:
s, rem = rem.split_y(item, direction)
ret_list.append(s)
return ret_list, rem
def draw_debug(self, a_canvas):
a_canvas.saveState()
a_canvas.setFont("Courier", 8)
a_canvas.rect(self.x, self.y, self.width, self.height)
a_canvas.drawString(self.x, self.y, self.debug_name or self.__repr__())
a_canvas.restoreState()
def draw_border(self, a_canvas, edge):
def draw_border_impl(en):
if en == 'min_x':
coordinates = ((self.min_x, self.min_y), (self.min_x, self.max_y))
elif en == 'max_x':
coordinates = ((self.max_x, self.min_y), (self.max_x, self.max_y))
elif en == 'min_y':
coordinates = ((self.min_x, self.min_y), (self.max_x, self.min_y))
elif en == 'max_y':
coordinates = ((self.min_x, self.max_y), (self.max_x, self.max_y))
else:
return
s = a_canvas.beginPath()
s.moveTo(*coordinates[0])
s.lineTo(*coordinates[1])
a_canvas.drawPath(s)
if type(edge) is str:
edge = [edge]
for e in edge:
draw_border_impl(e)
def draw_text_cell(self, a_canvas, text, font_name, font_size,
vertical_align='t', force_baseline=None, inset_x=0.,
inset_y=0., draw_baseline=False):
if text is None:
return
a_canvas.saveState()
inset_rect = self.inset_xy(inset_x, inset_y)
if vertical_align == 'm':
y = inset_rect.center_y - getAscent(font_name, font_size) / 2.
elif vertical_align == 't':
y = inset_rect.max_y - getAscent(font_name, font_size)
else:
y = inset_rect.min_y - getDescent(font_name, font_size)
if force_baseline is not None:
y = self.min_y + force_baseline
cp = a_canvas.beginPath()
cp.rect(self.min_x, self.min_y, self.width, self.height)
a_canvas.clipPath(cp, stroke=0, fill=0)
a_canvas.setFont(font_name, font_size)
tx = a_canvas.beginText()
tx.setTextOrigin(inset_rect.min_x, y)
tx.textLine(text)
a_canvas.drawText(tx)
if draw_baseline:
a_canvas.setDash([3.0, 1.0, 2.0, 1.0])
a_canvas.setLineWidth(0.5)
bl = a_canvas.beginPath()
bl.moveTo(inset_rect.min_x, y - 1.)
bl.lineTo(inset_rect.max_x, y - 1.)
a_canvas.drawPath(bl)
a_canvas.restoreState()
def draw_flowable(self, a_canvas, flowable, inset_x=0.,
inset_y=0., draw_baselines=False):
a_canvas.saveState()
inset_rect = self.inset_xy(inset_x, inset_y)
cp = a_canvas.beginPath()
cp.rect(self.min_x, self.min_y, self.width, self.height)
a_canvas.clipPath(cp, stroke=0, fill=0)
w, h = flowable.wrap(inset_rect.width, inset_rect.height)
flowable.drawOn(a_canvas, inset_rect.x, inset_rect.max_y - h)
if draw_baselines:
a_canvas.setDash([3.0, 1.0, 2.0, 1.0])
a_canvas.setLineWidth(0.5)
leading = flowable.style.leading
y = inset_rect.max_y - flowable.style.fontSize - 1.
            while y > inset_rect.min_y:
bl = a_canvas.beginPath()
bl.moveTo(inset_rect.min_x, y)
bl.lineTo(inset_rect.max_x, y)
a_canvas.drawPath(bl)
y = y - leading
a_canvas.restoreState()
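The `time_format` helper above renders minute budgets in the PDF tables. Restated standalone with a few sample values to show the three output shapes:

```python
def time_format(mins, zero_str="-"):
    # Same logic as the helper above: zero_str for zero, "Nm" under an
    # hour, "H:MM" at an hour or more.
    if mins == 0. and zero_str is not None:
        return zero_str
    elif mins < 60.:
        return "%im" % round(mins)
    else:
        hh, mm = divmod(round(mins), 60)
        return "%i:%02i" % (hh, mm)

print(time_format(0))    # → -
print(time_format(45))   # → 45m
print(time_format(125))  # → 2:05
```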


@@ -0,0 +1,57 @@
from fractions import Fraction
from typing import Tuple, List
from reportlab.lib.pagesizes import portrait, letter
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch
from reportlab.platypus import Paragraph, Table, Spacer
from ptulsconv.broadcast_timecode import TimecodeFormat
from ptulsconv.pdf import make_doc_template
# TODO: A Continuity
def table_for_scene(scene, tc_format):
scene_style = getSampleStyleSheet()['Normal']
scene_style.fontName = 'Futura'
scene_style.leftIndent = 0.
scene_style.leftPadding = 0.
scene_style.spaceAfter = 18.
tc_data = "<em>%s</em><br />%s" % (tc_format.seconds_to_smpte(scene[2]), tc_format.seconds_to_smpte(scene[3]))
row = [
Paragraph(tc_data, scene_style),
Paragraph(scene[1], scene_style),
]
style = [('VALIGN', (0, 0), (-1, -1), 'TOP'),
('LEFTPADDING', (0, 0), (0, 0), 0.0),
('BOTTOMPADDING', (0, 0), (-1, -1), 12.),
('FONTNAME', (0, 0), (-1, -1), 'Futura')]
return Table(data=[row], style=style, colWidths=[1.0 * inch, 6.5 * inch])
def output_report(scenes: List[Tuple[str, str, Fraction, Fraction]],
tc_display_format: TimecodeFormat,
title: str, client: str, supervisor):
filename = "%s Continuity.pdf" % title
document_header = "Continuity"
doc = make_doc_template(page_size=portrait(letter),
filename=filename,
document_title="Continuity",
title=title,
client=client,
document_subheader="",
supervisor=supervisor,
document_header=document_header,
left_margin=0.5 * inch)
story = list()
# story.append(Spacer(height=0.5 * inch, width=1.))
for scene in scenes:
story.append(table_for_scene(scene, tc_display_format))
doc.build(story)

ptulsconv/pdf/line_count.py Normal file

@@ -0,0 +1,243 @@
from typing import List, Optional
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
from reportlab.lib.units import inch
from reportlab.lib.pagesizes import letter, portrait
from reportlab.lib import colors
from reportlab.platypus import Table, Paragraph, Spacer
from reportlab.lib.styles import getSampleStyleSheet
from . import time_format, make_doc_template
from ..docparser.adr_entity import ADRLine
def build_columns(lines: List[ADRLine], reel_list: Optional[List[str]], show_priorities=False, include_omitted=False):
columns = list()
reel_numbers = reel_list or sorted(set([x.reel for x in lines if x.reel is not None]))
num_column_width = 15. / 32. * inch
columns.append({
'heading': '#',
'value_getter': lambda recs: recs[0].character_id,
'value_getter2': lambda recs: "",
'style_getter': lambda col_index: [],
'width': 0.375 * inch,
'summarize': False
})
columns.append({
'heading': 'Role',
'value_getter': lambda recs: recs[0].character_name,
'value_getter2': lambda recs: recs[0].actor_name or "",
'style_getter': lambda col_index: [('LINEAFTER', (col_index, 0), (col_index, -1), 1.0, colors.black)],
'width': 1.75 * inch,
'summarize': False
})
columns.append({
'heading': 'TV',
'value_getter': lambda recs: len([r for r in recs if r.tv]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
for r in recs if r.tv])),
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER'),
('LINEBEFORE', (col_index, 0), (col_index, -1), 1., colors.black),
('LINEAFTER', (col_index, 0), (col_index, -1), .5, colors.gray)],
'width': num_column_width
})
columns.append({
'heading': 'Opt',
'value_getter': lambda recs: len([r for r in recs if r.optional]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
for r in recs if r.optional])),
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER'),
('LINEAFTER', (col_index, 0), (col_index, -1), .5, colors.gray)],
'width': num_column_width
})
columns.append({
'heading': 'Eff',
'value_getter': lambda recs: len([r for r in recs if r.effort]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
for r in recs if r.effort])),
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')],
'width': num_column_width
})
columns.append({
'heading': '',
'value_getter': lambda _: '',
'value_getter2': lambda _: '',
'style_getter': lambda col_index: [
('LINEBEFORE', (col_index, 0), (col_index, -1), 1., colors.black),
('LINEAFTER', (col_index, 0), (col_index, -1), 1., colors.black),
],
'width': 2.
})
if len(reel_numbers) > 0:
# columns.append({
# 'heading': 'RX',
# 'value_getter': lambda recs: blank_len([r for r in recs if 'Reel' not in r.keys()]),
# 'value_getter2': lambda recs: time_format(sum([r.get('Time Budget Mins', 0.) for r in recs
# if 'Reel' not in r.keys()])),
# 'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')],
# 'width': num_column_width
# })
for n in reel_numbers:
columns.append({
'heading': n,
'value_getter': lambda recs, n1=n: len([r for r in recs if r.reel == n1]),
'value_getter2': lambda recs, n1=n: time_format(sum([r.time_budget_mins or 0. for r
in recs if r.reel == n1])),
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER'),
('LINEAFTER', (col_index, 0), (col_index, -1), .5, colors.gray)],
'width': num_column_width
})
if show_priorities:
        for n in range(1, 6):
            columns.append({
                'heading': 'P%i' % n,
                # Bind n at definition time (n1=n), as the reel columns
                # above do; a bare closure over n would leave every
                # priority column counting n's final value.
                'value_getter': lambda recs, n1=n: len([r for r in recs if r.priority == n1]),
                'value_getter2': lambda recs, n1=n: time_format(sum([r.time_budget_mins or 0.
                                                                     for r in recs if r.priority == n1])),
                'style_getter': lambda col_index: [],
                'width': num_column_width
            })
columns.append({
'heading': '>P5',
'value_getter': lambda recs: len([r for r in recs if (r.priority or 5) > 5]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
for r in recs if (r.priority or 5) > 5])),
'style_getter': lambda col_index: [],
'width': num_column_width
})
if include_omitted:
columns.append({
'heading': 'Omit',
'value_getter': lambda recs: len([r for r in recs if r.omitted]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
for r in recs if r.omitted])),
'style_getter': lambda col_index: [('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')],
'width': num_column_width
})
columns.append({
'heading': 'Total',
'value_getter': lambda recs: len([r for r in recs if not r.omitted]),
'value_getter2': lambda recs: time_format(sum([r.time_budget_mins or 0.
for r in recs if not r.omitted]), zero_str=None),
'style_getter': lambda col_index: [('LINEBEFORE', (col_index, 0), (col_index, -1), 1.0, colors.black),
('ALIGN', (col_index, 0), (col_index, -1), 'CENTER')],
'width': 0.5 * inch
})
return columns
def populate_columns(lines: List[ADRLine], columns, include_omitted, _page_size):
data = list()
styles = list()
columns_widths = list()
sorted_character_numbers = sorted(set([x.character_id for x in lines]),
key=lambda x: str(x))
# construct column styles
for i, c in enumerate(columns):
styles.extend(c['style_getter'](i))
columns_widths.append(c['width'])
data.append(list(map(lambda x: x['heading'], columns)))
if not include_omitted:
lines = [x for x in lines if not x.omitted]
for n in sorted_character_numbers:
char_records = [x for x in lines if x.character_id == n]
row_data = list()
row_data2 = list()
for col in columns:
row1_index = len(data)
row2_index = row1_index + 1
row_data.append(col['value_getter'](list(char_records)))
row_data2.append(col['value_getter2'](list(char_records)))
styles.extend([('TEXTCOLOR', (0, row2_index), (-1, row2_index), colors.red),
('LINEBELOW', (0, row2_index), (-1, row2_index), 0.5, colors.black)])
data.append(row_data)
data.append(row_data2)
summary_row1 = list()
summary_row2 = list()
row1_index = len(data)
for col in columns:
if col.get('summarize', True):
summary_row1.append(col['value_getter'](lines))
summary_row2.append(col['value_getter2'](lines))
else:
summary_row1.append("")
summary_row2.append("")
styles.append(('LINEABOVE', (0, row1_index), (-1, row1_index), 2.0, colors.black))
data.append(summary_row1)
data.append(summary_row2)
return data, styles, columns_widths
# def build_header(column_widths):
# pass
def output_report(lines: List[ADRLine], reel_list: List[str], include_omitted=False,
page_size=portrait(letter)):
columns = build_columns(lines, include_omitted=include_omitted, reel_list=reel_list)
data, style, columns_widths = populate_columns(lines, columns, include_omitted, page_size)
style.append(('FONTNAME', (0, 0), (-1, -1), "Futura"))
style.append(('FONTSIZE', (0, 0), (-1, -1), 9.))
style.append(('LINEBELOW', (0, 0), (-1, 0), 1.0, colors.black))
# style.append(('LINEBELOW', (0, 1), (-1, -1), 0.25, colors.gray))
pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))
title = "%s Line Count" % lines[0].title
filename = title + '.pdf'
doc = make_doc_template(page_size=page_size, filename=filename,
document_title=title, title=lines[0].title,
document_subheader=lines[0].spot,
client=lines[0].client,
supervisor=lines[0].supervisor,
document_header='Line Count')
# header_data, header_style, header_widths = build_header(columns_widths)
# header_table = Table(data=header_data, style=header_style, colWidths=header_widths)
table = Table(data=data, style=style, colWidths=columns_widths)
story = [Spacer(height=0.5 * inch, width=1.), table]
style = getSampleStyleSheet()['Normal']
style.fontName = 'Futura'
style.fontSize = 12.
style.spaceBefore = 16.
style.spaceAfter = 16.
omitted_count = len([x for x in lines if x.omitted])
if not include_omitted and omitted_count > 0:
story.append(Paragraph("* %i Omitted lines are excluded." % omitted_count, style))
doc.build(story)
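The per-reel column getters in `build_columns` bind the loop variable through a default argument (`n1=n`). A small self-contained illustration, unrelated to reportlab, of the Python late-binding pitfall that convention avoids:

```python
# Without a default argument, every lambda closes over the same "n"
# and sees its final value once the loop has finished.
late = [lambda: n for n in range(3)]
print([f() for f in late])   # → [2, 2, 2]

# Binding n as a default argument captures its value at definition
# time, which is what the per-reel getters rely on.
bound = [lambda n1=n: n1 for n in range(3)]
print([f() for f in bound])  # → [0, 1, 2]
```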


@@ -0,0 +1,6 @@
# TODO: Complete Recordist Log
def output_report(records):
# order by start
pass


@@ -0,0 +1,146 @@
# -*- coding: utf-8 -*-
from . import time_format, make_doc_template
from reportlab.lib.units import inch
from reportlab.lib.pagesizes import letter, portrait
from reportlab.platypus import Paragraph, Spacer, KeepTogether, Table
from reportlab.lib.styles import getSampleStyleSheet
from typing import List
from ptulsconv.docparser.adr_entity import ADRLine
from ptulsconv.broadcast_timecode import TimecodeFormat
def build_aux_data_field(line: ADRLine):
entries = list()
if line.reason is not None:
entries.append("Reason: " + line.reason)
if line.note is not None:
entries.append("Note: " + line.note)
if line.requested_by is not None:
entries.append("Requested by: " + line.requested_by)
if line.shot is not None:
entries.append("Shot: " + line.shot)
fg_color = 'white'
tag_field = ""
if line.effort:
bg_color = 'red'
tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " % (bg_color, fg_color, "EFF")
elif line.tv:
bg_color = 'blue'
tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " % (bg_color, fg_color, "TV")
elif line.adlib:
bg_color = 'purple'
tag_field += "<font backColor=%s textColor=%s fontSize=11>%s</font> " % (bg_color, fg_color, "ADLIB")
entries.append(tag_field)
return "<br />".join(entries)
def build_story(lines: List[ADRLine], tc_rate: TimecodeFormat):
story = list()
this_scene = None
scene_style = getSampleStyleSheet()['Normal']
scene_style.fontName = 'Futura'
scene_style.leftIndent = 0.
scene_style.leftPadding = 0.
scene_style.spaceAfter = 18.
line_style = getSampleStyleSheet()['Normal']
line_style.fontName = 'Futura'
for line in lines:
table_style = [('VALIGN', (0, 0), (-1, -1), 'TOP'),
('LEFTPADDING', (0, 0), (0, 0), 0.0),
('BOTTOMPADDING', (0, 0), (-1, -1), 24.)]
cue_number_field = "%s<br /><font fontSize=7>%s</font>" % (line.cue_number, line.character_name)
time_data = time_format(line.time_budget_mins)
if line.priority is not None:
            time_data = time_data + "<br />" + "P: " + str(line.priority)
aux_data_field = build_aux_data_field(line)
tc_data = build_tc_data(line, tc_rate)
line_table_data = [[Paragraph(cue_number_field, line_style),
Paragraph(tc_data, line_style),
Paragraph(line.prompt, line_style),
Paragraph(time_data, line_style),
Paragraph(aux_data_field, line_style)
]]
line_table = Table(data=line_table_data,
colWidths=[inch * 0.75, inch, inch * 3., 0.5 * inch, inch * 2.],
style=table_style)
if (line.scene or "[No Scene]") != this_scene:
this_scene = line.scene or "[No Scene]"
story.append(KeepTogether([
Spacer(1., 0.25 * inch),
Paragraph("<u>" + this_scene + "</u>", scene_style),
line_table]))
else:
line_table.setStyle(table_style)
story.append(KeepTogether([line_table]))
return story
def build_tc_data(line: ADRLine, tc_format: TimecodeFormat):
tc_data = tc_format.seconds_to_smpte(line.start) + "<br />" + \
tc_format.seconds_to_smpte(line.finish)
third_line = []
if line.reel is not None:
if line.reel[0:1] == 'R':
third_line.append("%s" % line.reel)
else:
third_line.append("Reel %s" % line.reel)
if line.version is not None:
third_line.append("(%s)" % line.version)
if len(third_line) > 0:
tc_data = tc_data + "<br/>" + " ".join(third_line)
return tc_data
def generate_report(page_size, lines: List[ADRLine], tc_rate: TimecodeFormat, character_number=None,
include_omitted=True):
if character_number is not None:
lines = [r for r in lines if r.character_id == character_number]
title = "%s ADR Report (%s)" % (lines[0].title, lines[0].character_name)
document_header = "%s ADR Report" % lines[0].character_name
else:
title = "%s ADR Report" % lines[0].title
document_header = 'ADR Report'
if not include_omitted:
lines = [line for line in lines if not line.omitted]
lines = sorted(lines, key=lambda line: line.start)
filename = title + ".pdf"
doc = make_doc_template(page_size=page_size,
filename=filename, document_title=title,
document_header=document_header,
title=lines[0].title,
supervisor=lines[0].supervisor,
client=lines[0].client,
document_subheader=lines[0].spot,
left_margin=0.75 * inch)
story = build_story(lines, tc_rate)
doc.build(story)
def output_report(lines: List[ADRLine], tc_display_format: TimecodeFormat,
page_size=portrait(letter), by_character=False):
if by_character:
character_numbers = set((r.character_id for r in lines))
for n in character_numbers:
generate_report(page_size, lines, tc_display_format, n)
else:
generate_report(page_size, lines, tc_display_format)


@@ -0,0 +1,257 @@
from reportlab.pdfgen.canvas import Canvas
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
from reportlab.lib.units import inch
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import Paragraph
from . import GRect
from ptulsconv.broadcast_timecode import TimecodeFormat
from ptulsconv.docparser.adr_entity import ADRLine
import datetime
def draw_header_block(canvas, rect, record: ADRLine):
rect.draw_text_cell(canvas, record.cue_number, "Helvetica", 44, vertical_align='m')
def draw_character_row(canvas, rect, record: ADRLine):
label_frame, value_frame = rect.split_x(1.25 * inch)
label_frame.draw_text_cell(canvas, "CHARACTER", "Futura", 10, force_baseline=9.)
line = "%s / %s" % (record.character_id, record.character_name)
if record.actor_name is not None:
line = line + " / " + record.actor_name
value_frame.draw_text_cell(canvas, line, "Futura", 12, force_baseline=9.)
rect.draw_border(canvas, ['min_y', 'max_y'])
def draw_cue_number_block(canvas, rect, record: ADRLine):
(label_frame, number_frame,), aux_frame = rect.divide_y([0.20 * inch, 0.375 * inch], direction='d')
label_frame.draw_text_cell(canvas, "CUE NUMBER", "Futura", 10,
inset_y=5., vertical_align='t')
number_frame.draw_text_cell(canvas, record.cue_number, "Futura", 14,
inset_x=10., inset_y=2., draw_baseline=True)
tags = {'tv': 'TV',
'optional': 'OPT',
'adlib': 'ADLIB',
'effort': 'EFF',
'tbw': 'TBW',
'omitted': 'OMIT'}
tag_field = ""
for key in tags.keys():
if getattr(record, key):
tag_field = tag_field + tags[key] + " "
aux_frame.draw_text_cell(canvas, tag_field, "Futura", 10,
inset_x=10., inset_y=2., vertical_align='t')
rect.draw_border(canvas, 'max_x')
def draw_timecode_block(canvas, rect, record: ADRLine, tc_display_format: TimecodeFormat):
(in_label_frame, in_frame, out_label_frame, out_frame), _ = rect.divide_y(
[0.20 * inch, 0.25 * inch, 0.20 * inch, 0.25 * inch], direction='d')
in_label_frame.draw_text_cell(canvas, "IN", "Futura", 10,
vertical_align='t', inset_y=5., inset_x=5.)
in_frame.draw_text_cell(canvas, tc_display_format.seconds_to_smpte(record.start), "Futura", 14,
inset_x=10., inset_y=2., draw_baseline=True)
out_label_frame.draw_text_cell(canvas, "OUT", "Futura", 10,
vertical_align='t', inset_y=5., inset_x=5.)
out_frame.draw_text_cell(canvas, tc_display_format.seconds_to_smpte(record.finish), "Futura", 14,
inset_x=10., inset_y=2., draw_baseline=True)
rect.draw_border(canvas, 'max_x')
def draw_reason_block(canvas, rect, record: ADRLine):
reason_cell, notes_cell = rect.split_y(24., direction='d')
reason_label, reason_value = reason_cell.split_x(.75 * inch)
notes_label, notes_value = notes_cell.split_x(.75 * inch)
reason_label.draw_text_cell(canvas, "Reason:", "Futura", 12,
inset_x=5., inset_y=5., vertical_align='b')
reason_value.draw_text_cell(canvas, record.reason or "", "Futura", 12,
inset_x=5., inset_y=5., draw_baseline=True,
vertical_align='b')
notes_label.draw_text_cell(canvas, "Note:", "Futura", 12,
inset_x=5., inset_y=5., vertical_align='t')
style = getSampleStyleSheet()['BodyText']
style.fontName = 'Futura'
style.fontSize = 12
style.leading = 14
p = Paragraph(record.note or "", style)
notes_value.draw_flowable(canvas, p, draw_baselines=True, inset_x=5., inset_y=5.)
def draw_prompt(canvas, rect, prompt=""):
label, block = rect.split_y(0.20 * inch, direction='d')
label.draw_text_cell(canvas, "PROMPT", "Futura", 10, vertical_align='t', inset_y=5., inset_x=0.)
style = getSampleStyleSheet()['BodyText']
style.fontName = 'Futura'
style.fontSize = 14
style.leading = 24
style.leftIndent = 1.5 * inch
style.rightIndent = 1.5 * inch
p = Paragraph(prompt, style)
block.draw_flowable(canvas, p, draw_baselines=True)
rect.draw_border(canvas, 'max_y')
def draw_notes(canvas, rect, note=""):
label, block = rect.split_y(0.20 * inch, direction='d')
label.draw_text_cell(canvas, "NOTES", "Futura", 10, vertical_align='t', inset_y=5., inset_x=0.)
style = getSampleStyleSheet()['BodyText']
style.fontName = 'Futura'
style.fontSize = 14
style.leading = 24
prompt = Paragraph(note, style)
block.draw_flowable(canvas, prompt, draw_baselines=True)
rect.draw_border(canvas, ['max_y', 'min_y'])
def draw_take_grid(canvas, rect):
canvas.saveState()
cp = canvas.beginPath()
cp.rect(rect.min_x, rect.min_y, rect.width, rect.height)
canvas.clipPath(cp, stroke=0, fill=0)
canvas.setDash([3.0, 2.0])
for xi in range(1, 10):
x = xi * (rect.width / 10)
if xi % 5 == 0:
canvas.setDash(1, 0)
else:
canvas.setDash([2, 5])
ln = canvas.beginPath()
ln.moveTo(rect.min_x + x, rect.min_y)
ln.lineTo(rect.min_x + x, rect.max_y)
canvas.drawPath(ln)
for yi in range(1, 10):
y = yi * (rect.height / 6)
if yi % 2 == 0:
canvas.setDash(1, 0)
else:
canvas.setDash([2, 5])
ln = canvas.beginPath()
ln.moveTo(rect.min_x, rect.min_y + y)
ln.lineTo(rect.max_x, rect.min_y + y)
canvas.drawPath(ln)
rect.draw_border(canvas, 'max_x')
canvas.restoreState()
def draw_aux_block(canvas, rect, recording_time_sec_this_line, recording_time_sec):
rect.draw_border(canvas, 'min_x')
content_rect = rect.inset_xy(10., 10.)
lines, last_line = content_rect.divide_y([12., 12., 24., 24., 24., 24.], direction='d')
lines[0].draw_text_cell(canvas,
"Time for this line: %.1f mins" % (recording_time_sec_this_line / 60.), "Futura", 9.)
lines[1].draw_text_cell(canvas, "Running time: %03.1f mins" % (recording_time_sec / 60.), "Futura", 9.)
lines[2].draw_text_cell(canvas, "Actual Start: ______________", "Futura", 9., vertical_align='b')
lines[3].draw_text_cell(canvas, "Record Date: ______________", "Futura", 9., vertical_align='b')
lines[4].draw_text_cell(canvas, "Engineer: ______________", "Futura", 9., vertical_align='b')
lines[5].draw_text_cell(canvas, "Location: ______________", "Futura", 9., vertical_align='b')
def draw_footer(canvas, rect, record: ADRLine, report_date, line_no, total_lines):
rect.draw_border(canvas, 'max_y')
report_date_s = [report_date.strftime("%c")]
spotting_name = [record.spot] if record.spot is not None else []
pages_s = ["Line %i of %i" % (line_no, total_lines)]
footer_s = " - ".join(report_date_s + spotting_name + pages_s)
rect.draw_text_cell(canvas, footer_s, font_name="Futura", font_size=10., inset_y=2.)
def create_report_for_character(records, report_date, tc_display_format: TimecodeFormat):
outfile = "%s_%s_%s_Log.pdf" % (records[0].title,
records[0].character_id,
records[0].character_name,)
assert outfile is not None
assert outfile[-4:] == '.pdf', "Output file must have 'pdf' extension!"
pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))
page: GRect = GRect(0, 0, letter[0], letter[1])
page = page.inset(inch * 0.5)
(header_row, char_row, data_row, prompt_row, notes_row, takes_row), footer = \
page.divide_y([0.875 * inch, 0.375 * inch, inch, 3.0 * inch, 1.5 * inch, 3 * inch], direction='d')
cue_header_block, title_header_block = header_row.split_x(4.0 * inch)
(cue_number_block, timecode_block), reason_block = data_row.divide_x([1.5 * inch, 1.5 * inch])
(take_grid_block), aux_block = takes_row.split_x(5.25 * inch)
c = Canvas(outfile, pagesize=letter,)
c.setTitle("%s %s (%s) Supervisor's Log" % (records[0].title, records[0].character_name,
records[0].character_id))
c.setAuthor(records[0].supervisor)
recording_time_sec = 0.0
total_lines = len(records)
line_n = 1
for record in records:
record: ADRLine
recording_time_sec_this_line: float = (record.time_budget_mins or 6.0) * 60.0
recording_time_sec = recording_time_sec + recording_time_sec_this_line
draw_header_block(c, cue_header_block, record)
# FIXME: Draw the title
# TODO: Integrate this report into the common DocTemplate api
# draw_title_box(c, title_header_block, record)
draw_character_row(c, char_row, record)
draw_cue_number_block(c, cue_number_block, record)
draw_timecode_block(c, timecode_block, record, tc_display_format=tc_display_format)
draw_reason_block(c, reason_block, record)
draw_prompt(c, prompt_row, prompt=record.prompt)
draw_notes(c, notes_row, note="")
draw_take_grid(c, take_grid_block)
draw_aux_block(c, aux_block, recording_time_sec_this_line, recording_time_sec)
draw_footer(c, footer, record, report_date, line_no=line_n, total_lines=total_lines)
line_n = line_n + 1
c.showPage()
c.save()
def output_report(lines, tc_display_format: TimecodeFormat):
report_date = datetime.datetime.now()
events = sorted(lines, key=lambda x: x.start)
character_numbers = set([x.character_id for x in lines])
for n in character_numbers:
create_report_for_character([e for e in events if e.character_id == n], report_date,
tc_display_format=tc_display_format)
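The supervisor's-log loop above accumulates a running recording time: each line contributes its `time_budget_mins` (or a 6-minute default) converted to seconds. A small sketch of just that accumulation, independent of the ReportLab drawing code:

```python
def running_times(time_budgets_mins, default_mins=6.0):
    """Mirror the accumulation in create_report_for_character: each line
    contributes its budget (or the default) in seconds, and the running
    total grows line by line. Returns (this_line_sec, running_sec) pairs."""
    total = 0.0
    out = []
    for budget in time_budgets_mins:
        this_line = (budget or default_mins) * 60.0
        total += this_line
        out.append((this_line, total))
    return out
```

A line with no budget set falls back to six minutes, exactly as `record.time_budget_mins or 6.0` does above.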


@@ -0,0 +1,76 @@
# -*- coding: utf-8 -*-
from typing import List
from .__init__ import make_doc_template
from reportlab.lib.units import inch
from reportlab.lib.pagesizes import letter
from reportlab.platypus import Paragraph, Spacer, KeepTogether, Table, HRFlowable
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib import colors
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
from ..broadcast_timecode import TimecodeFormat
from ..docparser.adr_entity import ADRLine
def output_report(lines: List[ADRLine], tc_display_format: TimecodeFormat):
character_numbers = set([n.character_id for n in lines])
pdfmetrics.registerFont(TTFont('Futura', 'Futura.ttc'))
for n in character_numbers:
char_lines = [line for line in lines if not line.omitted and line.character_id == n]
character_name = char_lines[0].character_name
char_lines = sorted(char_lines, key=lambda line: line.start)
title = "%s (%s) %s ADR Script" % (char_lines[0].title, character_name, n)
filename = "%s_%s_%s_ADR Script.pdf" % (char_lines[0].title, n, character_name)
doc = make_doc_template(page_size=letter, filename=filename, document_title=title,
title=char_lines[0].title,
document_subheader=char_lines[0].spot,
supervisor=char_lines[0].supervisor,
client=char_lines[0].client,
document_header=character_name)
story = []
prompt_style = getSampleStyleSheet()['Normal']
prompt_style.fontName = 'Futura'
prompt_style.fontSize = 18.
prompt_style.leading = 24.
prompt_style.leftIndent = 1.5 * inch
prompt_style.rightIndent = 1.5 * inch
number_style = getSampleStyleSheet()['Normal']
number_style.fontName = 'Futura'
number_style.fontSize = 14
number_style.leading = 24
number_style.leftIndent = 0.
number_style.rightIndent = 0.
for line in char_lines:
start_tc = tc_display_format.seconds_to_smpte(line.start)
finish_tc = tc_display_format.seconds_to_smpte(line.finish)
data_block = [[Paragraph(line.cue_number, number_style),
Paragraph(start_tc + " - " + finish_tc, number_style)
]]
# RIGHTWARDS ARROW →
# Unicode: U+2192, UTF-8: E2 86 92
story.append(
KeepTogether(
[HRFlowable(width='50%', color=colors.black),
Table(data=data_block, colWidths=[1.5 * inch, 6. * inch],
style=[('LEFTPADDING', (0, 0), (-1, -1), 0.)]),
Paragraph(line.prompt, prompt_style),
Spacer(1., inch * 1.5)]
)
)
doc.build(story)


@@ -1,152 +0,0 @@
from parsimonious.nodes import NodeVisitor, Node
class DictionaryParserVisitor(NodeVisitor):
def visit_document(self, node: Node, visited_children) -> dict:
files = next(iter(visited_children[1]), None)
clips = next(iter(visited_children[2]), None)
plugins = next(iter(visited_children[3]), None)
tracks = next(iter(visited_children[4]), None)
markers = next(iter(visited_children[5]), None)
return dict(header=visited_children[0],
files=files,
clips=clips,
plugins=plugins,
tracks=tracks,
markers=markers)
@staticmethod
def visit_header(node, visited_children):
tc_drop = False
for _ in visited_children[20]:
tc_drop = True
return dict(session_name=visited_children[2],
sample_rate=visited_children[6],
bit_depth=visited_children[10],
start_timecode=visited_children[15],
timecode_format=visited_children[19],
timecode_drop_frame=tc_drop,
count_audio_tracks=visited_children[25],
count_clips=visited_children[29],
count_files=visited_children[33])
@staticmethod
def visit_files_section(node, visited_children):
return list(map(lambda child: dict(filename=child[0], path=child[2]), visited_children[2]))
@staticmethod
def visit_clips_section(node, visited_children):
channel = next(iter(visited_children[2][3]), 1)
return list(map(lambda child: dict(clip_name=child[0], file=child[2], channel=channel),
visited_children[2]))
@staticmethod
def visit_plugin_listing(node, visited_children):
return list(map(lambda child: dict(manufacturer=child[0],
plugin_name=child[2],
version=child[4],
format=child[6],
stems=child[8],
count_instances=child[10]),
visited_children[2]))
@staticmethod
def visit_track_block(node, visited_children):
track_header, track_clip_list = visited_children
clips = []
for clip in track_clip_list:
if clip[0] is not None:
clips.append(clip[0])
plugins = []
for plugin_opt in track_header[16]:
for plugin in plugin_opt[1]:
plugins.append(plugin[1])
return dict(
name=track_header[2],
comments=track_header[6],
user_delay_samples=track_header[10],
state=track_header[14],
plugins=plugins,
clips=clips
)
@staticmethod
def visit_track_listing(node, visited_children):
return visited_children[1]
@staticmethod
def visit_track_clip_entry(node, visited_children):
timestamp = None
if isinstance(visited_children[14], list):
timestamp = visited_children[14][0][0]
return dict(channel=visited_children[0],
event=visited_children[3],
clip_name=visited_children[6],
start_time=visited_children[8],
end_time=visited_children[10],
duration=visited_children[12],
timestamp=timestamp,
state=visited_children[15])
@staticmethod
def visit_track_state_list(node, visited_children):
states = []
for next_state in visited_children:
states.append(next_state[0][0].text)
return states
@staticmethod
def visit_track_clip_state(node, visited_children):
return node.text
@staticmethod
def visit_markers_listing(node, visited_children):
markers = []
for marker in visited_children[2]:
markers.append(marker)
return markers
@staticmethod
def visit_marker_record(node, visited_children):
return dict(number=visited_children[0],
location=visited_children[3],
time_reference=visited_children[5],
units=visited_children[8],
name=visited_children[10],
comments=visited_children[12])
@staticmethod
def visit_formatted_clip_name(_, visited_children):
return visited_children[1].text
@staticmethod
def visit_string_value(node, visited_children):
return node.text.strip(" ")
@staticmethod
def visit_integer_value(node, visited_children):
return int(node.text)
# def visit_timecode_value(self, node, visited_children):
# return node.text.strip(" ")
@staticmethod
def visit_float_value(node, visited_children):
return float(node.text)
def visit_block_ending(self, node, visited_children):
pass
def generic_visit(self, node, visited_children):
""" The generic visit method. """
return visited_children or node


@@ -1,22 +1,33 @@
import sys
def print_banner_style(message):
if sys.stderr.isatty():
sys.stderr.write("\n\033[1m%s\033[0m\n\n" % message)
else:
sys.stderr.write("\n%s\n\n" % message)
def print_section_header_style(message):
if sys.stderr.isatty():
sys.stderr.write("\n\033[4m%s\033[0m\n\n" % message)
else:
sys.stderr.write("%s\n\n" % message)
def print_status_style(message):
if sys.stderr.isatty():
sys.stderr.write("\033[3m - %s\033[0m\n" % message)
else:
sys.stderr.write(" - %s\n" % message)
def print_warning(warning_string):
if sys.stderr.isatty():
sys.stderr.write("\033[3m - %s\033[0m\n" % warning_string)
else:
sys.stderr.write(" - %s\n" % warning_string)
def print_advisory_tagging_error(failed_string, position, parent_track_name=None, clip_time=None):
if sys.stderr.isatty():
@@ -27,25 +38,26 @@ def print_advisory_tagging_error(failed_string, position, parent_track_name=None
sys.stderr.write("\033[32m\"%s\033[31;1m%s\"\033[0m\n" % (ok_string, not_ok_string))
if parent_track_name is not None:
sys.stderr.write(" ! > On track \"%s\"\n" % parent_track_name)
if clip_time is not None:
sys.stderr.write(" ! > In clip name at %s\n" % clip_time)
else:
sys.stderr.write("\n")
sys.stderr.write(" ! Tagging error: \"%s\"\n" % failed_string)
sys.stderr.write(" ! %s _______________⬆\n" % (" " * position))
if parent_track_name is not None:
sys.stderr.write(" ! > On track \"%s\"\n" % parent_track_name)
if clip_time is not None:
sys.stderr.write(" ! > In clip name at %s\n" % clip_time)
sys.stderr.write("\n")
def print_fatal_error(message):
if sys.stderr.isatty():
sys.stderr.write("\n\033[5;31;1m*** %s ***\033[0m\n" % message)
else:
sys.stderr.write("\n%s\n" % message)


@@ -1,266 +0,0 @@
from . import broadcast_timecode
from parsimonious import Grammar, NodeVisitor
from parsimonious.exceptions import IncompleteParseError
import math
import sys
from .reporting import print_advisory_tagging_error, print_section_header_style, print_status_style
from tqdm import tqdm
class Transformation:
def transform(self, input_dict) -> dict:
return input_dict
class TimecodeInterpreter(Transformation):
def __init__(self):
self.apply_session_start = False
def transform(self, input_dict: dict) -> dict:
print_section_header_style('Converting Timecodes')
retval = super().transform(input_dict)
rate = input_dict['header']['timecode_format']
start_tc = self.convert_time(input_dict['header']['start_timecode'], rate,
drop_frame=input_dict['header']['timecode_drop_frame'])
retval['header']['start_timecode_decoded'] = start_tc
print_status_style('Converted start timecode.')
retval['tracks'] = self.convert_tracks(input_dict['tracks'], timecode_rate=rate,
drop_frame=retval['header']['timecode_drop_frame'])
print_status_style('Converted clip timecodes for %i tracks.' % len(retval['tracks']))
for marker in retval['markers']:
marker['location_decoded'] = self.convert_time(marker['location'], rate,
drop_frame=retval['header']['timecode_drop_frame'])
print_status_style('Converted %i markers.' % len(retval['markers']))
return retval
def convert_tracks(self, tracks, timecode_rate, drop_frame):
for track in tracks:
new_clips = []
for clip in track['clips']:
new_clips.append(self.convert_clip(clip, drop_frame=drop_frame, timecode_rate=timecode_rate))
track['clips'] = new_clips
return tracks
def convert_clip(self, clip, timecode_rate, drop_frame):
time_fields = ['start_time', 'end_time', 'duration', 'timestamp']
for time_field in time_fields:
if clip[time_field] is not None:
clip[time_field + "_decoded"] = self.convert_time(clip[time_field], drop_frame=drop_frame,
frame_rate=timecode_rate)
return clip
def convert_time(self, time_string, frame_rate, drop_frame=False):
lfps = math.ceil(frame_rate)
frame_count = broadcast_timecode.smpte_to_frame_count(time_string, lfps, drop_frame_hint=drop_frame)
return dict(frame_count=frame_count, logical_fps=lfps, drop_frame=drop_frame)
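`convert_time` above delegates to `broadcast_timecode.smpte_to_frame_count` with a logical frame rate of `ceil(frame_rate)`. A simplified stand-in for the non-drop-frame case, to show what the decoded `frame_count` represents (the real function also handles drop-frame timecode via `drop_frame_hint`):

```python
def smpte_to_frame_count(tc, logical_fps):
    """Convert a non-drop "HH:MM:SS:FF" timecode string to an absolute
    frame count. Simplified sketch; not the real ptulsconv function."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * logical_fps + ff
```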
class TagInterpreter(Transformation):
tag_grammar = Grammar(
r"""
document = modifier? line? word_sep? tag_list?
line = word (word_sep word)*
tag_list = tag*
tag = key_tag / short_tag / full_text_tag / tag_junk
key_tag = "[" key "]" word_sep?
short_tag = "$" key "=" word word_sep?
full_text_tag = "{" key "=" value "}" word_sep?
key = ~"[A-Za-z][A-Za-z0-9_]*"
value = ~"[^}]+"
tag_junk = word word_sep?
word = ~"[^ \[\{\$][^ ]*"
word_sep = ~" +"
modifier = ("@" / "&") word_sep?
"""
)
class TagListVisitor(NodeVisitor):
def visit_document(self, _, visited_children):
modifier_opt, line_opt, _, tag_list_opt = visited_children
return dict(line=next(iter(line_opt), None),
tags=next(iter(tag_list_opt), None),
mode=next(iter(modifier_opt), 'Normal')
)
def visit_line(self, node, _):
return str.strip(node.text, " ")
def visit_modifier(self, node, _):
if node.text.startswith('@'):
return 'Timespan'
elif node.text.startswith('&'):
return 'Append'
else:
return 'Normal'
def visit_tag_list(self, _, visited_children):
retdict = dict()
for child in visited_children:
if child[0] is not None:
k, v = child[0]
retdict[k] = v
return retdict
def visit_key_tag(self, _, children):
return children[1].text, children[1].text
def visit_short_tag(self, _, children):
return children[1].text, children[3].text
def visit_full_text_tag(self, _, children):
return children[1].text, children[3].text
def visit_tag_junk(self, node, _):
return None
def generic_visit(self, node, visited_children):
return visited_children or node
def __init__(self, ignore_muted=True, show_progress=False, log_output=sys.stderr):
self.visitor = TagInterpreter.TagListVisitor()
self.ignore_muted = ignore_muted
self.show_progress = show_progress
self.log_output = log_output
def transform(self, input_dict: dict) -> dict:
transformed = list()
timespan_rules = list()
print_section_header_style('Parsing Tags')
title_tags = self.parse_tags(input_dict['header']['session_name'])
markers = sorted(input_dict['markers'], key=lambda m: m['location_decoded']['frame_count'])
if self.show_progress:
track_iter = tqdm(input_dict['tracks'], desc="Reading tracks...", unit='Track')
else:
track_iter = input_dict['tracks']
for track in track_iter:
if 'Muted' in track['state'] and self.ignore_muted:
continue
track_tags = self.parse_tags(track['name'], parent_track_name=track['name'])
comment_tags = self.parse_tags(track['comments'], parent_track_name=track['name'])
track_context_tags = track_tags['tags']
track_context_tags.update(comment_tags['tags'])
for clip in track['clips']:
if clip['state'] == 'Muted' and self.ignore_muted:
continue
clip_tags = self.parse_tags(clip['clip_name'], parent_track_name=track['name'], clip_time=clip['start_time'])
clip_start = clip['start_time_decoded']['frame_count']
if clip_tags['mode'] == 'Normal':
event = dict()
event.update(title_tags['tags'])
event.update(track_context_tags)
event.update(self.effective_timespan_tags_at_time(timespan_rules, clip_start))
event.update(self.effective_marker_tags_at_time(markers, clip_start))
event.update(clip_tags['tags'])
event['PT.Track.Name'] = track_tags['line']
event['PT.Session.Name'] = title_tags['line']
event['PT.Clip.Number'] = clip['event']
event['PT.Clip.Name'] = clip_tags['line']
event['PT.Clip.Start'] = clip['start_time']
event['PT.Clip.Finish'] = clip['end_time']
event['PT.Clip.Start_Frames'] = clip_start
event['PT.Clip.Finish_Frames'] = clip['end_time_decoded']['frame_count']
event['PT.Clip.Start_Seconds'] = clip_start / input_dict['header']['timecode_format']
event['PT.Clip.Finish_Seconds'] = clip['end_time_decoded']['frame_count'] / input_dict['header'][
'timecode_format']
transformed.append(event)
elif clip_tags['mode'] == 'Append':
assert len(transformed) > 0, "First clip is in '&'-Append mode, fatal error."
transformed[-1].update(clip_tags['tags'])
transformed[-1]['PT.Clip.Name'] = transformed[-1]['PT.Clip.Name'] + " " + clip_tags['line']
transformed[-1]['PT.Clip.Finish_Frames'] = clip['end_time_decoded']['frame_count']
transformed[-1]['PT.Clip.Finish'] = clip['end_time']
transformed[-1]['PT.Clip.Finish_Seconds'] = clip['end_time_decoded']['frame_count'] / input_dict['header'][
'timecode_format']
elif clip_tags['mode'] == 'Timespan':
rule = dict(start_time=clip_start,
end_time=clip['end_time_decoded']['frame_count'],
tags=clip_tags['tags'])
timespan_rules.append(rule)
print_status_style('Processed %i clips' % len(transformed))
return dict(header=input_dict['header'], events=transformed)
def effective_timespan_tags_at_time(_, rules, time) -> dict:
retval = dict()
for rule in rules:
if rule['start_time'] <= time <= rule['end_time']:
retval.update(rule['tags'])
return retval
def effective_marker_tags_at_time(self, markers, time):
retval = dict()
for marker in markers:
marker_name_tags = self.parse_tags(marker['name'], marker_index=marker['number'])
marker_comment_tags = self.parse_tags(marker['comments'], marker_index=marker['number'])
effective_tags = marker_name_tags['tags']
effective_tags.update(marker_comment_tags['tags'])
if marker['location_decoded']['frame_count'] <= time:
retval.update(effective_tags)
else:
break
return retval
def parse_tags(self, source, parent_track_name=None, clip_time=None, marker_index=None):
try:
parse_tree = self.tag_grammar.parse(source)
return self.visitor.visit(parse_tree)
except IncompleteParseError as e:
print_advisory_tagging_error(failed_string=source,
parent_track_name=parent_track_name,
clip_time=clip_time, position=e.pos)
trimmed_source = source[:e.pos]
parse_tree = self.tag_grammar.parse(trimmed_source)
return self.visitor.visit(parse_tree)
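The grammar above accepts three tag forms after the line text: `[KEY]` (key tag, value equals the key), `$KEY=word` (short tag) and `{KEY=value}` (full-text tag), with an optional leading `@` or `&` mode modifier. A rough stdlib approximation of just the tag extraction, using `re` instead of the parsimonious parser (illustrative only; it skips the line/modifier handling):

```python
import re

# Regex covering the three tag forms of the grammar: [KEY], $KEY=word,
# {KEY=value}. Key syntax mirrors ~"[A-Za-z][A-Za-z0-9_]*".
TAG_RE = re.compile(r"\[(?P<k1>[A-Za-z]\w*)\]"
                    r"|\$(?P<k2>[A-Za-z]\w*)=(?P<v2>\S+)"
                    r"|\{(?P<k3>[A-Za-z]\w*)=(?P<v3>[^}]+)\}")


def extract_tags(text):
    """Return a dict of tag keys to values, like visit_tag_list."""
    tags = {}
    for m in TAG_RE.finditer(text):
        if m.group("k1"):
            tags[m.group("k1")] = m.group("k1")   # key tag: value is the key
        elif m.group("k2"):
            tags[m.group("k2")] = m.group("v2")   # short tag
        else:
            tags[m.group("k3")] = m.group("v3")   # full-text tag
    return tags
```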
class SubclipOfSequence(Transformation):
def __init__(self, start, end):
self.start = start
self.end = end
def transform(self, input_dict: dict) -> dict:
out_events = []
offset = self.start
offset_sec = self.start / input_dict['header']['timecode_format']
for event in input_dict['events']:
if self.start <= event['PT.Clip.Start_Frames'] <= self.end:
e = event
e['PT.Clip.Start_Frames'] = event['PT.Clip.Start_Frames'] - offset
e['PT.Clip.Finish_Frames'] = event['PT.Clip.Finish_Frames'] - offset
e['PT.Clip.Start_Seconds'] = event['PT.Clip.Start_Seconds'] - offset_sec
e['PT.Clip.Finish_Seconds'] = event['PT.Clip.Finish_Seconds'] - offset_sec
out_events.append(e)
return dict(events=out_events)

ptulsconv/validations.py (new file, 70 lines)

@@ -0,0 +1,70 @@
from dataclasses import dataclass
from ptulsconv.docparser.adr_entity import ADRLine
from typing import Iterator, Optional
@dataclass
class ValidationError:
message: str
event: Optional[ADRLine] = None
def report_message(self):
if self.event is not None:
return f"{self.message}: event at {self.event.start} with number {self.event.cue_number}"
else:
return self.message
def validate_unique_count(input_lines: Iterator[ADRLine], field='title', count=1):
values = set(list(map(lambda e: getattr(e, field), input_lines)))
if len(values) > count:
yield ValidationError(message="Field {} has too many values (max={}): {}".format(field, count, values))
def validate_value(input_lines: Iterator[ADRLine], key_field, predicate):
for event in input_lines:
val = getattr(event, key_field)
if not predicate(val):
yield ValidationError(message='Field {} value {} not in range'.format(key_field, val),
event=event)
def validate_unique_field(input_lines: Iterator[ADRLine], field='cue_number', scope=None):
values = dict()
for event in input_lines:
this = getattr(event, field)
if scope is not None:
key = getattr(event, scope)
else:
key = '_values'
values.setdefault(key, set())
if this in values[key]:
yield ValidationError(message='Re-used {}'.format(field), event=event)
else:
values[key].add(this)
def validate_non_empty_field(input_lines: Iterator[ADRLine], field='cue_number'):
for event in input_lines:
if getattr(event, field, None) is None:
yield ValidationError(message='Empty field {}'.format(field), event=event)
def validate_dependent_value(input_lines: Iterator[ADRLine], key_field, dependent_field):
"""
Validates that two events with the same value in `key_field` always have the
same value in `dependent_field`
"""
input_lines = list(input_lines)  # materialize: the iterator is traversed more than once below
key_values = set((getattr(x, key_field) for x in input_lines))
for key_value in key_values:
rows = [(getattr(x, key_field), getattr(x, dependent_field)) for x in input_lines
if getattr(x, key_field) == key_value]
unique_rows = set(rows)
if len(unique_rows) > 1:
message = "Non-unique values for key {} = ".format(key_field)
for u in unique_rows:
message = message + "\n - {} -> {}".format(u[0], u[1])
yield ValidationError(message=message, event=None)
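The validators above are generators that yield `ValidationError` records rather than raising. A minimal usage sketch of the unique-field idea, with a hypothetical stand-in class carrying only the fields a validator reads (the real `ADRLine` lives in `ptulsconv.docparser.adr_entity`):

```python
from dataclasses import dataclass


# Hypothetical stand-in for ADRLine, for illustration only.
@dataclass
class Cue:
    cue_number: str
    title: str


def check_unique(cues, field="cue_number"):
    """Same idea as validate_unique_field without scoping: yield a
    message for each re-used value of `field`."""
    seen = set()
    for cue in cues:
        value = getattr(cue, field)
        if value in seen:
            yield "Re-used %s: %s" % (field, value)
        else:
            seen.add(value)  # add the whole string, not its characters


errors = list(check_unique([Cue("A101", "Show"), Cue("A101", "Show")]))
```

Because the validators are lazy generators, a caller collects them with `list(...)` (or iterates) to actually run the checks.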

ptulsconv/xml/common.py (new file, 164 lines)

@@ -0,0 +1,164 @@
import os
import os.path
import pathlib
import subprocess
import sys
import glob
import datetime
from xml.etree.ElementTree import TreeBuilder, tostring
from typing import List
import ptulsconv
from ptulsconv.docparser.adr_entity import ADRLine
# TODO Get a third-party test for Avid Marker lists
def avid_marker_list(lines: List[ADRLine], report_date=datetime.datetime.now(), reel_start_frame=0, fps=24):
doc = TreeBuilder(element_factory=None)
doc.start('Avid:StreamItems', {'xmlns:Avid': 'http://www.avid.com'})
doc.start('Avid:XMLFileData', {})
doc.start('AvProp', {'name': 'DomainMagic', 'type': 'string'})
doc.data("Domain")
doc.end('AvProp')
doc.start('AvProp', {'name': 'DomainKey', 'type': 'string'})
doc.data("58424a44")
doc.end('AvProp')
def insert_elem(kind, attb, atype, name, value):
doc.start('ListElem', {})
doc.start('AvProp', {'id': 'ATTR',
'name': 'OMFI:ATTB:Kind',
'type': 'int32'})
doc.data(kind)
doc.end('AvProp')
doc.start('AvProp', {'id': 'ATTR',
'name': 'OMFI:ATTB:Name',
'type': 'string'})
doc.data(name)
doc.end('AvProp')
doc.start('AvProp', {'id': 'ATTR',
'name': attb,
'type': atype})
doc.data(value)
doc.end('AvProp')
doc.end('ListElem')
for line in lines:
doc.start('AvClass', {'id': 'ATTR'})
doc.start('AvProp', {'id': 'ATTR', 'name': '__OMFI:ATTR:NumItems', 'type': 'int32'})
doc.data('7')
doc.end('AvProp')
doc.start('List', {'id': 'OMFI:ATTR:AttrRefs'})
insert_elem('1', 'OMFI:ATTB:IntAttribute', 'int32', '_ATN_CRM_LONG_CREATE_DATE', report_date.strftime("%s"))
insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string', '_ATN_CRM_COLOR', 'yellow')
insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string', '_ATN_CRM_USER', line.supervisor or "")
marker_name = "%s: %s" % (line.cue_number, line.prompt)
insert_elem('2', 'OMFI:ATTB:StringAttribute', 'string', '_ATN_CRM_COM', marker_name)
start_frame = int(line.start * fps)
insert_elem('2', "OMFI:ATTB:StringAttribute", 'string', '_ATN_CRM_TC',
str(start_frame - reel_start_frame))
insert_elem('2', "OMFI:ATTB:StringAttribute", 'string', '_ATN_CRM_TRK', 'V1')
insert_elem('1', "OMFI:ATTB:IntAttribute", 'int32', '_ATN_CRM_LENGTH', '1')
doc.start('ListElem', {})
doc.end('ListElem')
doc.end('List')
doc.end('AvClass')
doc.end('Avid:XMLFileData')
doc.end('Avid:StreamItems')
def dump_fmpxml(data, input_file_name, output, adr_field_map):
    doc = TreeBuilder(element_factory=None)
    doc.start('FMPXMLRESULT', {'xmlns': 'http://www.filemaker.com/fmpxmlresult'})
    doc.start('ERRORCODE', {})
    doc.data('0')
    doc.end('ERRORCODE')
    doc.start('PRODUCT', {'NAME': ptulsconv.__name__, 'VERSION': ptulsconv.__version__})
    doc.end('PRODUCT')
    doc.start('DATABASE', {'DATEFORMAT': 'MM/dd/yy', 'LAYOUT': 'summary', 'TIMEFORMAT': 'hh:mm:ss',
                           'RECORDS': str(len(data['events'])), 'NAME': os.path.basename(input_file_name)})
    doc.end('DATABASE')
    doc.start('METADATA', {})
    for field in adr_field_map:
        tp = field[2]
        ft = 'TEXT'
        if tp is int or tp is float:
            ft = 'NUMBER'
        doc.start('FIELD', {'EMPTYOK': 'YES', 'MAXREPEAT': '1', 'NAME': field[1], 'TYPE': ft})
        doc.end('FIELD')
    doc.end('METADATA')
    doc.start('RESULTSET', {'FOUND': str(len(data['events']))})
    for event in data['events']:
        doc.start('ROW', {})
        for field in adr_field_map:
            doc.start('COL', {})
            doc.start('DATA', {})
            for key_attempt in field[0]:
                if key_attempt in event.keys():
                    doc.data(str(event[key_attempt]))
                    break
            doc.end('DATA')
            doc.end('COL')
        doc.end('ROW')
    doc.end('RESULTSET')
    doc.end('FMPXMLRESULT')
    docelem = doc.close()
    xmlstr = tostring(docelem, encoding='unicode', method='xml')
    output.write(xmlstr)


xslt_path = os.path.join(pathlib.Path(__file__).parent.absolute(), 'xslt')


def xform_options():
    return glob.glob(os.path.join(xslt_path, "*.xsl"))


def dump_xform_options(output=sys.stdout):
    print("# Available transforms:", file=output)
    print("# Transform dir: %s" % xslt_path, file=output)
    for f in xform_options():
        base = os.path.basename(f)
        name, _ = os.path.splitext(base)
        print("# " + name, file=output)


def fmp_transformed_dump(data, input_file, xsl_name, output, adr_field_map):
    from ptulsconv.reporting import print_status_style
    import io
    pipe = io.StringIO()
    print_status_style("Generating base XML")
    dump_fmpxml(data, input_file, pipe, adr_field_map)
    str_data = pipe.getvalue()
    print_status_style("Base XML size %i" % (len(str_data)))
    print_status_style("Running xsltproc")
    xsl_path = os.path.join(pathlib.Path(__file__).parent.absolute(), 'xslt', xsl_name + ".xsl")
    print_status_style("Using xsl: %s" % xsl_path)
    subprocess.run(['xsltproc', xsl_path, '-'],
                   input=str_data, text=True,
                   stdout=output, shell=False, check=True)
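The `adr_field_map` convention assumed by `dump_fmpxml` — each entry a tuple of (candidate event keys, FileMaker column name, Python type), where the first candidate key present in an event supplies the column's DATA value — can be sketched standalone. The field names and events below are illustrative, not from ptulsconv itself:

```python
from xml.etree.ElementTree import TreeBuilder, tostring

# Illustrative field map: (candidate_keys, column_name, python_type)
field_map = [
    (('cue_number',), 'Cue Number', str),
    (('start_seconds', 'start'), 'Start', float),  # first key found wins
]
events = [{'cue_number': 'J1001', 'start': 3600.0}]

doc = TreeBuilder()
doc.start('RESULTSET', {'FOUND': str(len(events))})
for event in events:
    doc.start('ROW', {})
    for keys, name, _tp in field_map:
        doc.start('COL', {})
        doc.start('DATA', {})
        for key in keys:              # fall through candidate keys in order
            if key in event:
                doc.data(str(event[key]))
                break
        doc.end('DATA')
        doc.end('COL')
    doc.end('ROW')
doc.end('RESULTSET')
xml_str = tostring(doc.close(), encoding='unicode')
```

The fallback loop means a single FileMaker column can be fed from whichever of several event keys happens to be populated, which is why `field[0]` is a tuple rather than a single key.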


@@ -37,8 +37,16 @@
             <AvProp id="ATTR" name="OMFI:ATTB:Kind" type="int32">2</AvProp>
             <AvProp id="ATTR" name="OMFI:ATTB:Name" type="string">_ATN_CRM_COM</AvProp>
             <AvProp id="ATTR" name="OMFI:ATTB:StringAttribute" type="string">
-                <xsl:value-of select="concat(fmp:COL[15]/fmp:DATA, ': ', fmp:COL[21]/fmp:DATA)"/>
-                [Reason: <xsl:value-of select="fmp:COL[18]/fmp:DATA" />]</AvProp>
+                <xsl:value-of select="concat('(',fmp:COL[14]/fmp:DATA,') ',fmp:COL[15]/fmp:DATA, ': ', fmp:COL[21]/fmp:DATA, ' ')"/>
+                <xsl:choose>
+                    <xsl:when test="fmp:COL[18]/fmp:DATA != ''">[Reason: <xsl:value-of select="fmp:COL[18]/fmp:DATA" />]
+                    </xsl:when>
+                    <xsl:otherwise> </xsl:otherwise>
+                </xsl:choose>
+                <xsl:choose>
+                    <xsl:when test="fmp:COL[23]/fmp:DATA != ''">[Note: <xsl:value-of select="fmp:COL[23]/fmp:DATA" />]</xsl:when>
+                </xsl:choose>
+            </AvProp>
             </ListElem>
             <ListElem>
             <AvProp id="ATTR" name="OMFI:ATTB:Kind" type="int32">2</AvProp>

ptulsconv/xslt/SRT.xsl (new file)

@@ -0,0 +1,30 @@
<?xml version="1.0" encoding="UTF-8"?>
<xsl:transform version="1.0"
               xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
               xmlns:fmp="http://www.filemaker.com/fmpxmlresult">
    <xsl:output method="text" encoding="windows-1252"/>
    <xsl:template match="/">
        <xsl:for-each select="/fmp:FMPXMLRESULT/fmp:RESULTSET/fmp:ROW">
            <xsl:sort data-type="number" select="number(fmp:COL[9]/fmp:DATA)"/>
            <xsl:value-of select="concat(position(), '&#xA;')"/>
            <!-- Start time as HH:MM:SS,mmm; minutes are taken mod 60 so
                 times past the first hour do not overflow the field. -->
            <xsl:value-of select="concat(format-number(floor(number(fmp:COL[9]/fmp:DATA) div 3600), '00'), ':')"/>
            <xsl:value-of select="concat(format-number(floor(number(fmp:COL[9]/fmp:DATA) div 60) mod 60, '00'), ':')"/>
            <xsl:value-of select="concat(format-number(floor(number(fmp:COL[9]/fmp:DATA) mod 60), '00'), ',')"/>
            <xsl:value-of select="format-number((number(fmp:COL[9]/fmp:DATA) - floor(number(fmp:COL[9]/fmp:DATA))) * 1000, '000')"/>
            <xsl:text> --> </xsl:text>
            <xsl:value-of select="concat(format-number(floor(number(fmp:COL[10]/fmp:DATA) div 3600), '00'), ':')"/>
            <xsl:value-of select="concat(format-number(floor(number(fmp:COL[10]/fmp:DATA) div 60) mod 60, '00'), ':')"/>
            <xsl:value-of select="concat(format-number(floor(number(fmp:COL[10]/fmp:DATA) mod 60), '00'), ',')"/>
            <xsl:value-of select="format-number((number(fmp:COL[10]/fmp:DATA) - floor(number(fmp:COL[10]/fmp:DATA))) * 1000, '000')"/>
            <xsl:value-of select="concat('&#xA;', fmp:COL[15]/fmp:DATA, ': ', fmp:COL[21]/fmp:DATA)"/>
            <xsl:value-of select="'&#xA;&#xA;'"/>
        </xsl:for-each>
    </xsl:template>
</xsl:transform>
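The HH:MM:SS,mmm arithmetic an SRT timestamp needs (hours by integer division, minutes mod 60, seconds mod 60, fractional part as milliseconds) can be sketched in Python. `srt_timestamp` is an illustrative helper, not part of ptulsconv:

```python
def srt_timestamp(seconds: float) -> str:
    """Render a seconds value as an SRT-style HH:MM:SS,mmm timestamp."""
    whole = int(seconds)
    millis = round((seconds - whole) * 1000)
    return "%02d:%02d:%02d,%03d" % (
        whole // 3600,        # hours
        (whole // 60) % 60,   # minutes, wrapped past the first hour
        whole % 60,           # seconds
        millis)

print(srt_timestamp(3725.145))  # 01:02:05,145
```

Note the `% 60` on the minutes term: without it a time of, say, 3725 seconds would render a minutes field of 62.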

requirements.txt (new file)

@@ -0,0 +1,5 @@
setuptools~=56.2.0
reportlab~=3.5.67
ffmpeg~=1.4
parsimonious~=0.8.1
tqdm~=4.60.0


@@ -30,10 +30,13 @@ setup(name='ptulsconv',
                 "Topic :: Text Processing :: Markup :: XML"],
       packages=['ptulsconv'],
       keywords='text-processing parsers film tv editing editorial',
-      install_requires=['parsimonious', 'tqdm'],
+      install_requires=['parsimonious', 'tqdm', 'reportlab'],
+      package_data={
+          "ptulsconv": ["xslt/*.xsl"]
+      },
       entry_points={
           'console_scripts': [
               'ptulsconv = ptulsconv.__main__:main'
           ]
       }
       )

tests/test_adr_entity.py (new file)

@@ -0,0 +1,38 @@
import unittest
from ptulsconv.docparser.tag_compiler import Event
from ptulsconv.docparser.adr_entity import ADRLine, make_entity
from fractions import Fraction


class TestADREntity(unittest.TestCase):
    def test_event2line(self):
        tags = {
            'Ver': '1.0',
            'Actor': "Bill",
            'CN': "1",
            'QN': 'J1001',
            'R': 'Noise',
            'EFF': 'EFF'
        }
        event = Event(clip_name='"This is a test." (sotto voce)',
                      track_name="Justin",
                      session_name="Test Project",
                      tags=tags,
                      start=Fraction(0, 1), finish=Fraction(1, 1))
        line = make_entity(event)
        self.assertIsInstance(line, ADRLine)
        self.assertEqual('Bill', line.actor_name)
        self.assertEqual('Justin', line.character_name)
        self.assertEqual('"This is a test." (sotto voce)', line.prompt)
        self.assertEqual('Noise', line.reason)
        self.assertEqual('J1001', line.cue_number)
        self.assertEqual(True, line.effort)
        self.assertEqual('Test Project', line.title)
        self.assertEqual('1.0', line.version)


if __name__ == '__main__':
    unittest.main()


@@ -1,9 +1,9 @@
 import unittest
 from ptulsconv import broadcast_timecode
+from fractions import Fraction


 class TestBroadcastTimecode(unittest.TestCase):
-    def test_basic_to_framecount(self):
+    def test_basic_to_frame_count(self):
         r1 = "01:00:00:00"
         f1 = broadcast_timecode.smpte_to_frame_count(r1, 24, False)
         self.assertEqual(f1, 86_400)
@@ -32,19 +32,7 @@ class TestBroadcastTimecode(unittest.TestCase):
         s1 = broadcast_timecode.frame_count_to_smpte(c1, 30, drop_frame=True)
         self.assertEqual(s1, "01:00:03;18")

-    def test_fractional_to_framecount(self):
-        s1 = "00:00:01:04.105"
-        c1, f1 = broadcast_timecode.smpte_to_frame_count(s1, 24, drop_frame_hint=False, include_fractional=True)
-        self.assertEqual(c1, 28)
-        self.assertEqual(f1, 0.105)
-
-    def test_fractional_to_string(self):
-        c1 = 99
-        f1 = .145
-        s1 = broadcast_timecode.frame_count_to_smpte(c1, 25, drop_frame=False, fractional_frame=f1)
-        self.assertEqual(s1, "00:00:03:24.145")
-
-    def test_drop_frame_to_framecount(self):
+    def test_drop_frame_to_frame_count(self):
         r1 = "01:00:00;00"
         z1 = broadcast_timecode.smpte_to_frame_count(r1, 30, drop_frame_hint=True)
         self.assertEqual(z1, 107_892)
@@ -61,17 +49,13 @@ class TestBroadcastTimecode(unittest.TestCase):
         f3 = broadcast_timecode.smpte_to_frame_count(r3, 30, True)
         self.assertEqual(f3, 1799)

-    def test_footage_to_framecount(self):
+    def test_footage_to_frame_count(self):
         s1 = "194+11"
         f1 = broadcast_timecode.footage_to_frame_count(s1)
         self.assertEqual(f1, 3115)

-        s2 = "1+1.014"
-        f2 = broadcast_timecode.footage_to_frame_count(s2, include_fractional=True)
-        self.assertEqual(f2, (17, 0.014))
-
         s3 = "0+0.1"
-        f3 = broadcast_timecode.footage_to_frame_count(s3, include_fractional=False)
+        f3 = broadcast_timecode.footage_to_frame_count(s3)
         self.assertEqual(f3, 0)

     def test_frame_count_to_footage(self):
@@ -79,10 +63,13 @@ class TestBroadcastTimecode(unittest.TestCase):
         s1 = broadcast_timecode.frame_count_to_footage(c1)
         self.assertEqual(s1, "1+03")

-        c2 = 24
-        f2 = .1
-        s2 = broadcast_timecode.frame_count_to_footage(c2, fractional_frames=f2)
-        self.assertEqual(s2, "1+08.100")
+    def test_seconds_to_smpte(self):
+        secs = Fraction(25, 24)
+        frame_duration = Fraction(1, 24)
+        tc_format = broadcast_timecode.TimecodeFormat(frame_duration=frame_duration, logical_fps=24, drop_frame=False)
+        s1 = tc_format.seconds_to_smpte(secs)
+        self.assertEqual(s1, "00:00:01:01")

 if __name__ == '__main__':
     unittest.main()


@@ -0,0 +1,24 @@
import unittest
from ptulsconv.docparser.doc_entity import HeaderDescriptor
from fractions import Fraction


class DocParserTestCase(unittest.TestCase):
    def test_header(self):
        header = HeaderDescriptor(session_name="Test Session",
                                  sample_rate=48000.0,
                                  bit_depth=24,
                                  start_timecode="00:59:52:00",
                                  timecode_format="30",
                                  timecode_drop_frame=False,
                                  count_audio_tracks=0,
                                  count_clips=0,
                                  count_files=0)
        self.assertEqual(header.session_name, "Test Session")
        self.assertEqual(header.start_time, Fraction((59 * 60 + 52) * 30, 30))


if __name__ == '__main__':
    unittest.main()


@@ -1,6 +1,5 @@
 import unittest
-import ptulsconv
-#import pprint
+from ptulsconv.docparser import parse_document

 import os.path
@@ -8,102 +7,77 @@ class TestRobinHood1(unittest.TestCase):
     path = os.path.dirname(__file__) + '/export_cases/Robin Hood Spotting.txt'

     def test_header_export(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertTrue('header' in parsed.keys())
-            self.assertEqual(parsed['header']['session_name'], 'Robin Hood Spotting')
-            self.assertEqual(parsed['header']['sample_rate'], 48000.0)
-            self.assertEqual(parsed['header']['bit_depth'], 24)
-            self.assertEqual(parsed['header']['timecode_format'], 29.97)
-            self.assertEqual(parsed['header']['timecode_drop_frame'], False)
+        session = parse_document(self.path)
+
+        self.assertIsNotNone(session.header)
+        self.assertEqual(session.header.session_name, 'Robin Hood Spotting')
+        self.assertEqual(session.header.sample_rate, 48000.0)
+        self.assertEqual(session.header.bit_depth, 24)
+        self.assertEqual(session.header.timecode_fps, '29.97')
+        self.assertEqual(session.header.timecode_drop_frame, False)

     def test_all_sections(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertIn('header', parsed.keys())
-            self.assertIn('files', parsed.keys())
-            self.assertIn('clips', parsed.keys())
-            self.assertIn('plugins', parsed.keys())
-            self.assertIn('tracks', parsed.keys())
-            self.assertIn('markers', parsed.keys())
+        session = parse_document(self.path)
+
+        self.assertIsNotNone(session.header)
+        self.assertIsNotNone(session.files)
+        self.assertIsNotNone(session.clips)
+        self.assertIsNotNone(session.plugins)
+        self.assertIsNotNone(session.tracks)
+        self.assertIsNotNone(session.markers)

     def test_tracks(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertEqual(len(parsed['tracks']), 14)
-            self.assertListEqual(["Scenes", "Robin", "Will", "Marian", "John",
-                                  "Guy", "Much", "Butcher", "Town Crier",
-                                  "Soldier 1", "Soldier 2", "Soldier 3",
-                                  "Priest", "Guest at Court"],
-                                 list(map(lambda n: n['name'], parsed['tracks'])))
-            self.assertListEqual(["", "[ADR] {Actor=Errol Flynn} $CN=1",
-                                  "[ADR] {Actor=Patrick Knowles} $CN=2",
-                                  "[ADR] {Actor=Olivia DeHavilland} $CN=3",
-                                  "[ADR] {Actor=Claude Raines} $CN=4",
-                                  "[ADR] {Actor=Basil Rathbone} $CN=5",
-                                  "[ADR] {Actor=Herbert Mundin} $CN=6",
-                                  "[ADR] {Actor=George Bunny} $CN=101",
-                                  "[ADR] {Actor=Leonard Mundie} $CN=102",
-                                  "[ADR] $CN=103",
-                                  "[ADR] $CN=104",
-                                  "[ADR] $CN=105",
-                                  "[ADR] {Actor=Thomas R. Mills} $CN=106",
-                                  "[ADR] $CN=107"],
-                                 list(map(lambda n: n['comments'], parsed['tracks'])))
+        session = parse_document(self.path)
+
+        self.assertEqual(len(session.tracks), 14)
+        self.assertListEqual(["Scenes", "Robin", "Will", "Marian", "John",
+                              "Guy", "Much", "Butcher", "Town Crier",
+                              "Soldier 1", "Soldier 2", "Soldier 3",
+                              "Priest", "Guest at Court"],
+                             list(map(lambda t: t.name, session.tracks)))
+        self.assertListEqual(["", "[ADR] {Actor=Errol Flynn} $CN=1",
+                              "[ADR] {Actor=Patrick Knowles} $CN=2",
+                              "[ADR] {Actor=Olivia DeHavilland} $CN=3",
+                              "[ADR] {Actor=Claude Raines} $CN=4",
+                              "[ADR] {Actor=Basil Rathbone} $CN=5",
+                              "[ADR] {Actor=Herbert Mundin} $CN=6",
+                              "[ADR] {Actor=George Bunny} $CN=101",
+                              "[ADR] {Actor=Leonard Mundie} $CN=102",
+                              "[ADR] $CN=103",
+                              "[ADR] $CN=104",
+                              "[ADR] $CN=105",
+                              "[ADR] {Actor=Thomas R. Mills} $CN=106",
+                              "[ADR] $CN=107"],
+                             list(map(lambda t: t.comments, session.tracks)))

     def test_a_track(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            guy_track = parsed['tracks'][5]
-            self.assertEqual(guy_track['name'], 'Guy')
-            self.assertEqual(guy_track['comments'], '[ADR] {Actor=Basil Rathbone} $CN=5')
-            self.assertEqual(guy_track['user_delay_samples'], 0)
-            self.assertListEqual(guy_track['state'], [])
-            self.assertEqual(len(guy_track['clips']), 16)
-            self.assertEqual(guy_track['clips'][5]['channel'], 1)
-            self.assertEqual(guy_track['clips'][5]['event'], 6)
-            self.assertEqual(guy_track['clips'][5]['clip_name'], "\"What's your name? You Saxon dog!\" $QN=GY106")
-            self.assertEqual(guy_track['clips'][5]['start_time'], "01:04:19:15")
-            self.assertEqual(guy_track['clips'][5]['end_time'], "01:04:21:28")
-            self.assertEqual(guy_track['clips'][5]['duration'], "00:00:02:13")
-            self.assertEqual(guy_track['clips'][5]['timestamp'], None)
-            self.assertEqual(guy_track['clips'][5]['state'], 'Unmuted')
+        session = parse_document(self.path)
+        guy_track = session.tracks[5]
+        self.assertEqual(guy_track.name, 'Guy')
+        self.assertEqual(guy_track.comments, '[ADR] {Actor=Basil Rathbone} $CN=5')
+        self.assertEqual(guy_track.user_delay_samples, 0)
+        self.assertListEqual(guy_track.state, [])
+        self.assertEqual(len(guy_track.clips), 16)
+        self.assertEqual(guy_track.clips[5].channel, 1)
+        self.assertEqual(guy_track.clips[5].event, 6)
+        self.assertEqual(guy_track.clips[5].clip_name, "\"What's your name? You Saxon dog!\" $QN=GY106")
+        self.assertEqual(guy_track.clips[5].start_timecode, "01:04:19:15")
+        self.assertEqual(guy_track.clips[5].finish_timecode, "01:04:21:28")
+        self.assertEqual(guy_track.clips[5].duration, "00:00:02:13")
+        self.assertEqual(guy_track.clips[5].timestamp, None)
+        self.assertEqual(guy_track.clips[5].state, 'Unmuted')

     def test_memory_locations(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertEqual(len(parsed['markers']), 1)
-            self.assertEqual(parsed['markers'][0]['number'], 1)
-            self.assertEqual(parsed['markers'][0]['location'], "01:00:00:00")
-            self.assertEqual(parsed['markers'][0]['time_reference'], 0)
-            self.assertEqual(parsed['markers'][0]['units'], "Samples")
-
-    def test_transform_timecode(self):
-        parsed = dict()
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed = visitor.visit(result)
-        xformer = ptulsconv.TimecodeInterpreter()
-        xformer.apply_session_start = True
-        xformed = xformer.transform(parsed)
+        session = parse_document(self.path)
+
+        self.assertEqual(len(session.markers), 1)
+        self.assertEqual(session.markers[0].number, 1)
+        self.assertEqual(session.markers[0].location, "01:00:00:00")
+        self.assertEqual(session.markers[0].time_reference, 0)
+        self.assertEqual(session.markers[0].units, "Samples")

 if __name__ == '__main__':


@@ -1,5 +1,5 @@
 import unittest
-import ptulsconv
+from ptulsconv.docparser import parse_document

 import os.path
@@ -7,50 +7,38 @@ class TestRobinHood5(unittest.TestCase):
     path = os.path.dirname(__file__) + '/export_cases/Robin Hood Spotting5.txt'

     def test_skipped_segments(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertIsNone(parsed['files'])
-            self.assertIsNone(parsed['clips'])
+        session = parse_document(self.path)
+        self.assertIsNone(session.files)
+        self.assertIsNone(session.clips)

     def test_plugins(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertEqual(len(parsed['plugins']), 2)
+        session = parse_document(self.path)
+        self.assertEqual(len(session.plugins), 2)

     def test_stereo_track(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertEqual(parsed['tracks'][1]['name'], 'MX WT (Stereo)')
-            self.assertEqual(len(parsed['tracks'][1]['clips']), 2)
-            self.assertEqual(parsed['tracks'][1]['clips'][0]['clip_name'], 'RobinHood.1-01.L')
-            self.assertEqual(parsed['tracks'][1]['clips'][1]['clip_name'], 'RobinHood.1-01.R')
+        session = parse_document(self.path)
+        self.assertEqual(session.tracks[1].name, 'MX WT (Stereo)')
+        self.assertEqual(len(session.tracks[1].clips), 2)
+        self.assertEqual(session.tracks[1].clips[0].clip_name, 'RobinHood.1-01.L')
+        self.assertEqual(session.tracks[1].clips[1].clip_name, 'RobinHood.1-01.R')

     def test_a_track(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            guy_track = parsed['tracks'][8]
-            self.assertEqual(guy_track['name'], 'Guy')
-            self.assertEqual(guy_track['comments'], '[ADR] {Actor=Basil Rathbone} $CN=5')
-            self.assertEqual(guy_track['user_delay_samples'], 0)
-            self.assertListEqual(guy_track['state'], ['Solo'])
-            self.assertEqual(len(guy_track['clips']), 16)
-            self.assertEqual(guy_track['clips'][5]['channel'], 1)
-            self.assertEqual(guy_track['clips'][5]['event'], 6)
-            self.assertEqual(guy_track['clips'][5]['clip_name'], "\"What's your name? You Saxon dog!\" $QN=GY106")
-            self.assertEqual(guy_track['clips'][5]['start_time'], "01:04:19:15.00")
-            self.assertEqual(guy_track['clips'][5]['end_time'], "01:04:21:28.00")
-            self.assertEqual(guy_track['clips'][5]['duration'], "00:00:02:13.00")
-            self.assertEqual(guy_track['clips'][5]['timestamp'], "01:04:19:09.70")
-            self.assertEqual(guy_track['clips'][5]['state'], 'Unmuted')
+        session = parse_document(self.path)
+
+        guy_track = session.tracks[8]
+        self.assertEqual(guy_track.name, 'Guy')
+        self.assertEqual(guy_track.comments, '[ADR] {Actor=Basil Rathbone} $CN=5')
+        self.assertEqual(guy_track.user_delay_samples, 0)
+        self.assertListEqual(guy_track.state, ['Solo'])
+        self.assertEqual(len(guy_track.clips), 16)
+        self.assertEqual(guy_track.clips[5].channel, 1)
+        self.assertEqual(guy_track.clips[5].event, 6)
+        self.assertEqual(guy_track.clips[5].clip_name, "\"What's your name? You Saxon dog!\" $QN=GY106")
+        self.assertEqual(guy_track.clips[5].start_timecode, "01:04:19:15.00")
+        self.assertEqual(guy_track.clips[5].finish_timecode, "01:04:21:28.00")
+        self.assertEqual(guy_track.clips[5].duration, "00:00:02:13.00")
+        self.assertEqual(guy_track.clips[5].timestamp, "01:04:19:09.70")
+        self.assertEqual(guy_track.clips[5].state, 'Unmuted')

 if __name__ == '__main__':


@@ -1,5 +1,5 @@
 import unittest
-import ptulsconv
+from ptulsconv.docparser import parse_document

 import os.path
@@ -7,25 +7,24 @@ class TestRobinHood6(unittest.TestCase):
     path = os.path.dirname(__file__) + '/export_cases/Robin Hood Spotting6.txt'

     def test_a_track(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            marian_track = parsed['tracks'][6]
-            self.assertEqual(marian_track['name'], 'Marian')
-            self.assertEqual(marian_track['comments'], '[ADR] {Actor=Olivia DeHavilland} $CN=3')
-            self.assertEqual(marian_track['user_delay_samples'], 0)
-            self.assertListEqual(marian_track['state'], ['Solo'])
-            self.assertEqual(len(marian_track['clips']), 4)
-            self.assertListEqual(marian_track['plugins'], ['Channel Strip (mono)', 'ReVibe II (mono/5.1)'])
-            self.assertEqual(marian_track['clips'][2]['channel'], 1)
-            self.assertEqual(marian_track['clips'][2]['event'], 3)
-            self.assertEqual(marian_track['clips'][2]['clip_name'], "\"Isn't that reason enough for a Royal Ward who must obey her guardian?\" $QN=M103")
-            self.assertEqual(marian_track['clips'][2]['start_time'], "01:08:01:11")
-            self.assertEqual(marian_track['clips'][2]['end_time'], "01:08:04:24")
-            self.assertEqual(marian_track['clips'][2]['duration'], "00:00:03:12")
-            self.assertEqual(marian_track['clips'][2]['timestamp'], "01:08:01:11")
-            self.assertEqual(marian_track['clips'][2]['state'], 'Unmuted')
+        session = parse_document(self.path)
+
+        marian_track = session.tracks[6]
+        self.assertEqual(marian_track.name, 'Marian')
+        self.assertEqual(marian_track.comments, '[ADR] {Actor=Olivia DeHavilland} $CN=3')
+        self.assertEqual(marian_track.user_delay_samples, 0)
+        self.assertListEqual(marian_track.state, ['Solo'])
+        self.assertEqual(len(marian_track.clips), 4)
+        self.assertListEqual(marian_track.plugins, ['Channel Strip (mono)', 'ReVibe II (mono/5.1)'])
+        self.assertEqual(marian_track.clips[2].channel, 1)
+        self.assertEqual(marian_track.clips[2].event, 3)
+        self.assertEqual(marian_track.clips[2].clip_name,
+                         "\"Isn't that reason enough for a Royal Ward who must obey her guardian?\" $QN=M103")
+        self.assertEqual(marian_track.clips[2].start_timecode, "01:08:01:11")
+        self.assertEqual(marian_track.clips[2].finish_timecode, "01:08:04:24")
+        self.assertEqual(marian_track.clips[2].duration, "00:00:03:12")
+        self.assertEqual(marian_track.clips[2].timestamp, "01:08:01:11")
+        self.assertEqual(marian_track.clips[2].state, 'Unmuted')

 if __name__ == '__main__':


@@ -1,5 +1,5 @@
 import unittest
-import ptulsconv
+from ptulsconv.docparser import parse_document

 import os.path
@@ -7,30 +7,23 @@ class TestRobinHoodDF(unittest.TestCase):
     path = os.path.dirname(__file__) + '/export_cases/Robin Hood SpottingDF.txt'

     def test_header_export_df(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            self.assertTrue('header' in parsed.keys())
-            self.assertEqual(parsed['header']['timecode_drop_frame'], True)
+        session = parse_document(self.path)
+        self.assertEqual(session.header.timecode_drop_frame, True)

     def test_a_track(self):
-        with open(self.path, 'r') as f:
-            visitor = ptulsconv.DictionaryParserVisitor()
-            result = ptulsconv.protools_text_export_grammar.parse(f.read())
-            parsed: dict = visitor.visit(result)
-            guy_track = parsed['tracks'][4]
-            self.assertEqual(guy_track['name'], 'Robin')
-            self.assertEqual(guy_track['comments'], '[ADR] {Actor=Errol Flynn} $CN=1')
-            self.assertEqual(guy_track['user_delay_samples'], 0)
-            self.assertListEqual(guy_track['state'], [])
-            self.assertEqual(len(guy_track['clips']), 10)
-            self.assertEqual(guy_track['clips'][5]['channel'], 1)
-            self.assertEqual(guy_track['clips'][5]['event'], 6)
-            self.assertEqual(guy_track['clips'][5]['clip_name'], "\"Hold there! What's his fault?\" $QN=R106")
-            self.assertEqual(guy_track['clips'][5]['start_time'], "01:05:30;15")
-            self.assertEqual(guy_track['clips'][5]['end_time'], "01:05:32;01")
-            self.assertEqual(guy_track['clips'][5]['duration'], "00:00:01;16")
-            self.assertEqual(guy_track['clips'][5]['timestamp'], None)
-            self.assertEqual(guy_track['clips'][5]['state'], 'Unmuted')
+        session = parse_document(self.path)
+
+        guy_track = session.tracks[4]
+        self.assertEqual(guy_track.name, 'Robin')
+        self.assertEqual(guy_track.comments, '[ADR] {Actor=Errol Flynn} $CN=1')
+        self.assertEqual(guy_track.user_delay_samples, 0)
+        self.assertListEqual(guy_track.state, [])
+        self.assertEqual(len(guy_track.clips), 10)
+        self.assertEqual(guy_track.clips[5].channel, 1)
+        self.assertEqual(guy_track.clips[5].event, 6)
+        self.assertEqual(guy_track.clips[5].clip_name, "\"Hold there! What's his fault?\" $QN=R106")
+        self.assertEqual(guy_track.clips[5].start_timecode, "01:05:30;15")
+        self.assertEqual(guy_track.clips[5].finish_timecode, "01:05:32;01")
+        self.assertEqual(guy_track.clips[5].duration, "00:00:01;16")
+        self.assertEqual(guy_track.clips[5].timestamp, None)
+        self.assertEqual(guy_track.clips[5].state, 'Unmuted')

tests/test_tag_compiler.py (new file)

@@ -0,0 +1,124 @@
import unittest

import ptulsconv.docparser.tag_compiler
from ptulsconv.docparser import doc_entity
from fractions import Fraction


class TestTagCompiler(unittest.TestCase):
    def test_one_track(self):
        c = ptulsconv.docparser.tag_compiler.TagCompiler()
        test_session = self.make_test_session()
        c.session = test_session
        events = c.compile_events()

        event1 = next(events)
        self.assertEqual('This is clip 1', event1.clip_name)
        self.assertEqual('Track 1', event1.track_name)
        self.assertEqual('Test Session', event1.session_name)
        self.assertEqual(dict(A='A',
                              Color='Blue',
                              Ver='1.1',
                              Mode='2',
                              Comment='This is some text in the comments',
                              Part='1'), event1.tags)
        self.assertEqual(Fraction(3600, 1), event1.start)

        event2 = next(events)
        self.assertEqual("This is the second clip ...and this is the last clip", event2.clip_name)
        self.assertEqual('Track 1', event2.track_name)
        self.assertEqual('Test Session', event2.session_name)
        self.assertEqual(dict(R='Noise', A='A', B='B',
                              Color='Red',
                              Comment='This is some text in the comments',
                              N='1', Mode='2',
                              Ver='1.1',
                              M1='M1',
                              Part='2'), event2.tags)
        self.assertEqual(c.session.header.convert_timecode('01:00:01:10'), event2.start)
        self.assertEqual(c.session.header.convert_timecode('01:00:03:00'), event2.finish)

        self.assertIsNone(next(events, None))

    def test_tag_list(self):
        session = self.make_test_session()
        c = ptulsconv.docparser.tag_compiler.TagCompiler()
        c.session = session
        all_tags = c.compile_tag_list()
        self.assertTrue(all_tags['Mode'] == {'2', '1'})

    @staticmethod
    def make_test_session():
        test_header = doc_entity.HeaderDescriptor(session_name="Test Session $Ver=1.1",
                                                  sample_rate=48000,
                                                  timecode_format="24",
                                                  timecode_drop_frame=False,
                                                  bit_depth=24,
                                                  start_timecode='00:59:00:00',
                                                  count_audio_tracks=1,
                                                  count_clips=3,
                                                  count_files=0)
        test_clips = [
            doc_entity.TrackClipDescriptor(channel=1, event=1,
                                           clip_name='This is clip 1 {Color=Blue} $Mode=2',
                                           start_time='01:00:00:00',
                                           finish_time='01:00:01:03',
                                           duration='00:00:01:03',
                                           state='Unmuted',
                                           timestamp=None),
            doc_entity.TrackClipDescriptor(channel=1, event=2,
                                           clip_name='This is the second clip {R=Noise} [B] $Mode=2',
                                           start_time='01:00:01:10',
                                           finish_time='01:00:02:00',
                                           duration='00:00:00:14',
                                           state='Unmuted',
                                           timestamp=None),
            doc_entity.TrackClipDescriptor(channel=1, event=3,
                                           clip_name='& ...and this is the last clip $N=1 $Mode=2',
                                           start_time='01:00:02:00',
                                           finish_time='01:00:03:00',
                                           duration='00:00:01:00',
                                           state='Unmuted',
                                           timestamp=None),
        ]
        test_track = doc_entity.TrackDescriptor(name="Track 1 [A] {Color=Red} $Mode=1",
                                                comments="{Comment=This is some text in the comments}",
                                                user_delay_samples=0,
                                                plugins=[],
                                                state=[],
                                                clips=test_clips)
        markers = [doc_entity.MarkerDescriptor(number=1,
                                               location="01:00:00:00",
                                               time_reference=48000 * 60,
                                               units="Samples",
                                               name="Marker 1 {Part=1}",
                                               comments=""),
                   doc_entity.MarkerDescriptor(number=2,
                                               location="01:00:01:00",
                                               time_reference=48000 * 61,
                                               units="Samples",
                                               name="Marker 2 {Part=2}",
                                               comments="[M1]"),
                   ]
        test_session = doc_entity.SessionDescriptor(header=test_header,
                                                    tracks=[test_track],
                                                    clips=[],
                                                    files=[],
                                                    markers=markers,
                                                    plugins=[])
        return test_session


if __name__ == '__main__':
    unittest.main()


@@ -1,41 +1,39 @@
 import unittest
-from ptulsconv.transformations import TagInterpreter
+from ptulsconv.docparser.tagged_string_parser_visitor import parse_tags, TagPreModes


 class TestTagInterpreter(unittest.TestCase):
     def test_line(self):
-        ti = TagInterpreter()
-        s1 = ti.parse_tags("this is a test")
-        self.assertEqual(s1['line'], "this is a test")
-        self.assertEqual(s1['mode'], 'Normal')
-        self.assertEqual(len(s1['tags']), 0)
+        s1 = parse_tags("this is a test")
+        self.assertEqual(s1.content, "this is a test")
+        self.assertEqual(s1.mode, TagPreModes.NORMAL)
+        self.assertEqual(len(s1.tag_dict), 0)

-        s2 = ti.parse_tags("this! IS! Me! ** Typing! 123 <> |||")
-        self.assertEqual(s2['line'], "this! IS! Me! ** Typing! 123 <> |||")
-        self.assertEqual(s2['mode'], 'Normal')
-        self.assertEqual(len(s2['tags']), 0)
+        s2 = parse_tags("this! IS! Me! ** Typing! 123 <> |||")
+        self.assertEqual(s2.content, "this! IS! Me! ** Typing! 123 <> |||")
+        self.assertEqual(s2.mode, TagPreModes.NORMAL)
+        self.assertEqual(len(s2.tag_dict), 0)

     def test_tags(self):
-        ti = TagInterpreter()
-        s1 = ti.parse_tags("{a=100}")
-        self.assertIn('tags', s1)
-        self.assertEqual(s1['tags']['a'], "100")
+        s1 = parse_tags("{a=100}")
+        self.assertEqual(s1.tag_dict['a'], "100")

-        s2 = ti.parse_tags("{b=This is a test} [option] $X=9")
-        self.assertEqual(s2['tags']['b'], 'This is a test')
-        self.assertEqual(s2['tags']['option'], 'option')
-        self.assertEqual(s2['tags']['X'], "9")
+        s2 = parse_tags("{b=This is a test} [option] $X=9")
+        self.assertEqual(s2.tag_dict['b'], 'This is a test')
+        self.assertEqual(s2.tag_dict['option'], 'option')
+        self.assertEqual(s2.tag_dict['X'], "9")

     def test_modes(self):
-        ti = TagInterpreter()
-        s1 = ti.parse_tags("@ Monday Tuesday {a=1}")
-        self.assertEqual(s1['mode'], 'Timespan')
+        s1 = parse_tags("@ Monday Tuesday {a=1}")
+        self.assertEqual(s1.mode, TagPreModes.TIMESPAN)

-        s2 = ti.parse_tags("Monday Tuesday {a=1}")
-        self.assertEqual(s2['mode'], 'Normal')
+        s2 = parse_tags("Monday Tuesday {a=1}")
+        self.assertEqual(s2.mode, TagPreModes.NORMAL)

-        s3 = ti.parse_tags("&Monday Tuesday {a=1}")
-        self.assertEqual(s3['mode'], 'Append')
+        s3 = parse_tags("&Monday Tuesday {a=1}")
+        self.assertEqual(s3.mode, TagPreModes.APPEND)

 if __name__ == '__main__':
     unittest.main()
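The tag syntax these tests exercise has three forms: `{Key=Value}` named tags, `[Flag]` boolean tags (stored as their own name), and `$K=V` short tags. A rough regex sketch of just the tag-extraction part is below; `sketch_parse_tags` and `TAG_RE` are illustrative names, and this is not the parsimonious grammar ptulsconv actually uses:

```python
import re

# {Key=Value} | [Flag] | $K=V -- an approximation for illustration only
TAG_RE = re.compile(r"\{(\w+)=([^}]*)\}|\[(\w+)\]|\$(\w+)=(\S+)")

def sketch_parse_tags(text: str) -> dict:
    tags = {}
    for m in TAG_RE.finditer(text):
        if m.group(1):                    # {Key=Value}
            tags[m.group(1)] = m.group(2)
        elif m.group(3):                  # [Flag] stores its own name
            tags[m.group(3)] = m.group(3)
        else:                             # $K=V
            tags[m.group(4)] = m.group(5)
    return tags
```

For example, `sketch_parse_tags("{b=This is a test} [option] $X=9")` yields the same tag dictionary the `test_tags` case asserts on.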


@@ -1,91 +1,69 @@
import unittest
from ptulsconv.docparser import doc_entity, doc_parser_visitor, ptuls_grammar, tag_compiler
import os.path


class TaggingIntegratedTests(unittest.TestCase):
    path = os.path.dirname(__file__) + '/export_cases/Tag Tests/Tag Tests.txt'

    def test_event_list(self):
        with open(self.path, 'r') as f:
            document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())

        document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
        compiler = tag_compiler.TagCompiler()
        compiler.session = document
        events = list(compiler.compile_events())

        self.assertEqual(9, len(events))
        self.assertEqual("Clip Name", events[0].clip_name)
        self.assertEqual("Lorem ipsum", events[1].clip_name)
        self.assertEqual("Dolor sic amet the rain in spain", events[2].clip_name)
        self.assertEqual("A B C", events[3].clip_name)
        self.assertEqual("Silver Bridge", events[4].clip_name)
        self.assertEqual("Region 02", events[5].clip_name)
        self.assertEqual("Region 12", events[6].clip_name)
        self.assertEqual("Region 22", events[7].clip_name)
        self.assertEqual("Region 04", events[8].clip_name)
    def test_append(self):
        with open(self.path, 'r') as f:
            document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())

        document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
        compiler = tag_compiler.TagCompiler()
        compiler.session = document
        events = list(compiler.compile_events())

        self.assertTrue(len(events) > 2)

        self.assertEqual("Dolor sic amet the rain in spain", events[2].clip_name)
        self.assertEqual(document.header.convert_timecode("01:00:10:00"), events[2].start)
        self.assertEqual(document.header.convert_timecode("01:00:25:00"), events[2].finish)

        self.assertIn('X', events[2].tags.keys())
        self.assertIn('ABC', events[2].tags.keys())
        self.assertIn('A', events[2].tags.keys())
        self.assertEqual('302', events[2].tags['X'])
        self.assertEqual('ABC', events[2].tags['ABC'])
        self.assertEqual('1', events[2].tags['A'])
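The timecode comparisons in this test can be illustrated with a minimal conversion sketch. This assumes a 24 fps, non-drop session starting at 01:00:00:00; `timecode_to_frames` is a hypothetical helper for illustration, not the library's `convert_timecode`:

```python
def timecode_to_frames(tc: str, fps: int = 24) -> int:
    # Convert an "HH:MM:SS:FF" timecode string to an absolute frame
    # count at the given (non-drop) frame rate.
    hh, mm, ss, ff = (int(part) for part in tc.split(':'))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff


# Offsets relative to an assumed 01:00:00:00 session start:
session_start = timecode_to_frames("01:00:00:00")
print(timecode_to_frames("01:00:10:00") - session_start)  # 240
print(timecode_to_frames("01:00:25:00") - session_start)  # 600
```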
    def test_successive_appends(self):
        with open(self.path, 'r') as f:
            document_ast = ptuls_grammar.protools_text_export_grammar.parse(f.read())

        document: doc_entity.SessionDescriptor = doc_parser_visitor.DocParserVisitor().visit(document_ast)
        compiler = tag_compiler.TagCompiler()
        compiler.session = document
        events = list(compiler.compile_events())

        self.assertTrue(len(events) > 3)

        self.assertEqual("A B C", events[3].clip_name)
        self.assertEqual(document.header.convert_timecode("01:00:15:00"), events[3].start)
        self.assertEqual(document.header.convert_timecode("01:00:45:00"), events[3].finish)


if __name__ == '__main__':

20
tests/test_utils.py Normal file
View File

@@ -0,0 +1,20 @@
import unittest

from ptulsconv.docparser.tag_compiler import apply_appends


class MyTestCase(unittest.TestCase):
    def test_something(self):
        v = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        expected = [1, 2, 7, 5, 6, 15, 9, 10]
        should = (lambda x, y: y % 4 == 0)
        do_combine = (lambda x, y: x + y)
        r = apply_appends(iter(v), should, do_combine)
        r1 = list(r)
        self.assertEqual(r1, expected)


if __name__ == '__main__':
    unittest.main()
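The behavior this test pins down can be sketched with a self-contained re-implementation. This is an illustrative version inferred from the test data (3+4=7 and 7+8=15 fold each multiple of 4 into its predecessor), not the library's actual `apply_appends`:

```python
from typing import Callable, Iterator, TypeVar

T = TypeVar('T')


def apply_appends(items: Iterator[T],
                  should_append: Callable[[T, T], bool],
                  combine: Callable[[T, T], T]) -> Iterator[T]:
    # Hypothetical sketch: whenever should_append(prev, cur) holds,
    # fold cur into prev with combine(); otherwise emit prev and move on.
    # Chained appends work because the combined value stays in prev.
    prev = next(items, None)
    if prev is None:
        return
    for cur in items:
        if should_append(prev, cur):
            prev = combine(prev, cur)
        else:
            yield prev
            prev = cur
    yield prev


result = list(apply_appends(iter([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
                            lambda x, y: y % 4 == 0,
                            lambda x, y: x + y))
print(result)  # [1, 2, 7, 5, 6, 15, 9, 10]
```

The same predicate/combiner split is what lets the tag compiler reuse one generator for both clip-name appends and numeric folds like this one.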