# HG changeset patch
# User Daniel Wolff
# Date 1455988464 -3600
# Node ID e34cf1b6fe09e3b4c1cac9c722d86c40e9a4d3c6
commit
diff -r 000000000000 -r e34cf1b6fe09 .hg_archival.txt
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/.hg_archival.txt Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,6 @@
+repo: 5338cdb8ab02aa075b105d89db072ea3f2a2d411
+node: bf372b30d2e3bbf4ffbd6061ca79676f9d0d233e
+branch: public
+latesttag: null
+latesttagdistance: 280
+changessincelatesttag: 317
diff -r 000000000000 -r e34cf1b6fe09 .hgignore
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/.hgignore Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,5 @@
+\.orig$
+\.orig\..*$
+\.chg\..*$
+\.rej$
+\.pyc$
diff -r 000000000000 -r e34cf1b6fe09 COPYING
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/COPYING Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,674 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<http://www.gnu.org/licenses/>.
+
+ The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
diff -r 000000000000 -r e34cf1b6fe09 LICENSES
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/LICENSES Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,3 @@
+This distribution includes the following components:
+
+SPMF Sequential Pattern Mining Library,GNU General Public License (GPL) v3,http://www.philippe-fournier-viger.com/spmf/
\ No newline at end of file
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/chord_sequence_mining/chord2function.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/chord_sequence_mining/chord2function.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,328 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+__author__="Daniel Wolff"
+
+import re
+
+# these for file reading etc
+import fnmatch
+import os
+import csv
+import spmf
+
+import sys
+sys.path.insert(0, '../tools/')
+import csv2json as c2j
+
+# ---
+# roots
+# ---
+chord_roots = ["C","D","E","F","G","A","B"]
+
+# create a dictionary for efficiency
+roots_dic = dict(zip(chord_roots, [0,2,4,5,7,9,11]))
+
+mode_lbls = ['major','minor']
+mode_dic = dict(zip(mode_lbls, range(0,2)))
+# ---
+# types
+# ---
+type_labels = ["", "6", "7", "m","m6", "m7", "maj7", "m7b5", "dim", "dim7", "aug"]
+type_dic = dict(zip(type_labels, range(0,len(type_labels))))
+
+base_labels = ["1","","2","b3","3","4","","5","","6","b7","7"]
+#base_dic = dict(zip(base_labels, range(0,len(base_labels))))
+
+# functions
+root_funs_maj = ['I','#I','II','#II','III','IV','#IV','V','#V','VI','#VI','VII']
+root_funs_min = ['I','#I','II','III','#III','IV','#IV','V','VI','#VI','VII','#VII']
+# dan's suggestion
+#root_funs_maj = ['I','#I','II','#II','(M)III','IV','#IV','V','#V','VI','#VI','(M)VII']
+#root_funs_min = ['I','#I','II','(m)III','#III','IV','#IV','V','VI','#VI','(m)VII','#VII']
+
+fun_dic_maj = dict(zip(range(0,len(root_funs_maj)),root_funs_maj))
+fun_dic_min = dict(zip(range(0,len(root_funs_min)),root_funs_min))
+# regex that separates roots and types, and gets the chord base
+# accepts roots with sharps (#) and flats (b)
+p = re.compile(r'(?P<root>[A-G,N](#|b)*)(?P<type>[a-z,0-9]*)(/(?P<base>[A-G](#|b)*))*')
+p2 = re.compile(r'(?P<key>[A-G](#|b)*)(\s/\s[A-G](#|b)*)*\s(?P<mode>[major|minor]+)')
+pclip = re.compile(r'(?P<clipid>[A-Z,0-9]+(\-|_)[A-Z,0-9]+((\-|_)[A-Z,0-9]+)*((\-|_)[A-Z,0-9]+)*)_(?P<type>vamp.*).(?P<ext>(csv|xml|txt|n3)+)')
+
+ftype = {'key': 'vamp_qm-vamp-plugins_qm-keydetector_key',
+ 'chord': 'vamp_nnls-chroma_chordino_simplechord'}
+
+# most simple note2num
+def note2num(notein = 'Cb'):
+ base = roots_dic[notein[0]]
+ if len(notein) > 1:
+ if notein[1] == 'b':
+ return (base - 1) % 12
+ elif notein[1] == '#':
+ return (base + 1) % 12
+        else:
+            print "Error parsing chord " + notein
+            raise ValueError("could not parse note: " + notein)
+ else:
+ return base % 12
+
+
+# convert key to number
+def key2num(keyin = 'C major'):
+ # ---
+ # parse key string: separate root from rest
+ # ---
+ sepstring = p2.match(keyin)
+ if not sepstring:
+        print "Error parsing key " + keyin
+        raise ValueError("could not parse key: " + keyin)
+
+ # get relative position of chord and adapt for flats
+ key = sepstring.group('key')
+ key = note2num(key)
+
+ # ---
+ # parse mode. care for (unknown) string
+ # ---
+ mode = sepstring.group('mode')
+ if mode:
+ mode = mode_dic[mode]
+ else:
+ mode = -1
+
+ return (key, mode)
+
+
+
+# convert chord to relative function
+def chord2function(cin = 'B',key=3, mode=0):
+ # ---
+ # parse chord string: separate root from rest
+ # ---
+ sepstring = p.match(cin)
+
+ # test for N code -> no chord detected
+ if sepstring.group('root') == 'N':
+ return (-1,-1,-1,-1)
+
+ # get root and type otherwise
+ root = note2num(sepstring.group('root'))
+ type = sepstring.group('type')
+
+ typ = type_dic[type]
+
+ # get relative position
+ fun = (root - key) % 12
+
+ #--- do we have a base key?
+ # if yes return it relative to chord root
+ # ---
+ if sepstring.group('base'):
+ broot = note2num(sepstring.group('base'))
+ bfun = (broot - root) % 12
+ else:
+        # default: the bass is the chord root (base function 1)
+ bfun = 0
+
+
+ # ---
+ # todo: integrate bfun in final type list
+ # ---
+
+ return (root,fun,typ,bfun)
+
+# reads in any csv and returns a list of rows of structure
+# time(float), data1, data2, ..., dataN
+def read_vamp_csv(filein = ''):
+ output = []
+ with open(filein, 'rb') as csvfile:
+ contents = csv.reader(csvfile, delimiter=',', quotechar='"')
+ for row in contents:
+ output.append([float(row[0])] + row[1:])
+ return output
+
+# legacy:: finds featurefile for given piece
+def find_features(clipin = '', type='key'):
+ # ---
+ # These Parametres are for the high-level parse functions
+ # ---
+ featuredirs = {'key':'.\qm_vamp_key_standard.n3_50ac9',
+ 'chord': '.\chordino_simple.n3_1a812'}
+
+ # search for featurefile
+ featuredir = featuredirs[type].replace('\\', '/')
+ for file in os.listdir(featuredir):
+ if fnmatch.fnmatch(file, clipin+ '*' + ftype[type] + '*.csv'):
+ return featuredirs[type] + '/' + file
+
+# reads features for given clip and of specified type
+def get_features(clipin = '', type='key', featurefiles = 0):
+    if not featurefiles:
+        # legacy path: find_features returns the feature file path directly
+        return read_vamp_csv(find_features(clipin, type))
+    return read_vamp_csv(featurefiles[type])
+
+# histogram of the last entry in a list
+# returns the most frequently used key
+def histogram(keysin = []):
+ # build histogram
+ histo = dict()
+ for row in keysin:
+ histo[row[-1]] = histo.get(row[-1], 0) + 1
+
+ # return most frequent key
+ return (histo, max(histo.iterkeys(), key=(lambda key: histo[key])))
+
+
+# main function, processes all chords for one song
+def chords2functions(clipin = '1CD0006591_BD11-14',featurefiles = '', constkey = 1):
+
+ # get keys
+ keys = get_features(clipin,'key',featurefiles)
+
+ relchords = []
+ # chords
+ chords = get_features(clipin,'chord',featurefiles)
+    # note: a single constant key is assumed; key and mode below are
+    # undefined if constkey is falsy
+    if constkey:
+ # delete 'unknown' keys
+ keys = [(time,knum,key) for (time,knum,key) in keys if not key == '(unknown)']
+
+ # aggregate to one key
+ (histo, skey) = histogram(keys)
+
+        # get key number
+ (key,mode) = key2num(skey)
+
+ for (time,chord) in chords:
+
+ # get chord function
+ (root,fun,typ, bfun) = chord2function(chord, key,mode)
+
+ # translate into text
+ txt = fun2txt(fun,typ, bfun, mode)
+ #print 'Key: ' + skey + ', chord: ' + chord + ', function: ' + txt
+
+ relchords.append((time,key,mode,fun,typ,bfun))
+ return relchords
+
+def tracks_in_dir(dirin = ''):
+
+ # ---
+ # we now only search for tracks which have chord data
+ # ---
+
+ # data is a dictionary that
+ # for each filename contains the feature
+ # files for chords and keys
+
+ data = dict();
+ # traverse the file structure and get all track names
+ count = 0
+ errcount = 0
+ for (dirpath, dirnames, filenames) in os.walk(dirin):
+ for file in filenames:
+ #print '\rChecked %d files' % (count),
+ count = count + 1
+ if file.endswith(".csv"):
+ # parse filename to get clip_id
+ parsed = pclip.match(file)
+ if parsed:
+ clipin = parsed.group('clipid')
+
+ # initialise dict if necessary
+ if not data.has_key(clipin):
+ data[clipin] = dict()
+
+ # add data to dictionary
+ if parsed.group('type') == (ftype['chord']):
+ data[clipin]['chord'] = os.path.join(dirpath, file).replace('\\', '/')
+ elif parsed.group('type') == (ftype['key']):
+ data[clipin]['key'] = os.path.join(dirpath, file).replace('\\', '/')
+                else:
+                    errcount += 1
+                    print "Could not parse " + file
+ return data
+ # return list of tracknames
+ # return list of feature dirs
+
+
+def fun2txt(fun,typ, bfun,mode):
+    # now we can interpret this function
+    # when given the mode of major or minor.
+    # fun == -1 encodes 'no chord'
+    if (fun < 0):
+        return 'N'
+    if (mode == 1):
+        pfun = fun_dic_min[fun]
+        md = '(m)'
+    elif (mode == 0):
+        pfun = fun_dic_maj[fun]
+        md = '(M)'
+    else:
+        return 'N'
+
+ #if typ == 'm':
+ # print 'Key: ' + skey + ', chord: ' + chord + ' function ' + str(fun) + ' type ' + typ + ' bfun ' + str(bfun)
+ type = type_labels[typ] if typ > 0 else ''
+
+ blb = '/' + base_labels[bfun] if (bfun >= 0 and base_labels[bfun]) else ''
+ return md + pfun + type + blb
+
+def fun2num(fun,typ, bfun,mode):
+ # now we can interpret this function
+ if not fun == -1:
+ return (mode+1)* 1000000 + (fun+1) * 10000 + (typ+1) * 100 + (bfun+1)
+ else:
+ return 0
+
+def folder2functions(path):
+ tracks = tracks_in_dir(path)
+
+ # get chords for all files
+ #check for integrity: do we have keys and chords?
+ output = dict()
+ bfuns = []
+
+ for clip, featurefiles in tracks.iteritems():
+ print clip
+ if len(featurefiles) == 2:
+ output[clip] = chords2functions(clip,featurefiles)
+ return output
+
+def folder2histogram(path= './'):
+
+ # get chord functions for the folder
+ tracks = folder2functions(path)
+
+ # concatenate string form
+ chords = []
+ for track, contents in tracks.iteritems():
+ for (time,key,mode,fun,typ,bfun) in contents:
+ chords.append([fun2num(fun,typ,bfun,mode)])
+
+ # counts
+ (v,w) = histogram(chords)
+ print v
+ return {"count":v.values(), "index":v.keys()}
+
+if __name__ == "__main__":
+    print "Creates a key-independent chord histogram. Usage: chord2function path_to_vamp_features"
+    path = sys.argv[1] if len(sys.argv) > 1 else './'
+    result = folder2histogram(path)
+    print "Please input a description for the chord function histogram"
+    c2j.data2json(result)
\ No newline at end of file
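As a quick sanity check of the chord-splitting regex in chord2function.py, here is a standalone sketch: the pattern is reproduced with its named groups written out explicitly (`root`, `type`, `base`, the names the surrounding code reads via `group(...)`).

```python
import re

# chord-splitting pattern from chord2function.py: root (with sharps/flats),
# chord type, and an optional slash bass note
p = re.compile(r'(?P<root>[A-G,N](#|b)*)(?P<type>[a-z,0-9]*)(/(?P<base>[A-G](#|b)*))*')

m = p.match('Bbm7/F')
print(m.group('root'), m.group('type'), m.group('base'))  # -> Bb m7 F
print(p.match('N').group('root'))                         # -> N ("no chord")
```

Note the `[A-G,N]` character class also matches a literal comma; the code relies on real Chordino labels never starting with one.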
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/chord_sequence_mining/chordtable_relative.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/chord_sequence_mining/chordtable_relative.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,82 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+__author__="Daniel Wolff"
+
+#Generates look-up table for possible chords relative to
+
+import chord2function as c2f
+import csv
+import pickle
+
+HUMAN_READEABLE = 1;
+
+SAVE_CSV = 1
+SAVE_PICKLE = 0
+SAVE_VOCABULARY = 0
+
+chordmapfilename = 'relative-chordmap.cmap'
+chordmapfilename_csv = 'relative-chordmap.csv'
+chordino_chords_vocabulary = 'relative-chord-vocabulary.txt'
+
+# initialise chord dictionary
+chord_dic = dict();
+
+chord_dic[0] = 'N'
+# major and minor pieces are separated by chords
+for mode, mode_lbl in enumerate(c2f.mode_lbls):
+ for fun in range(0,len(c2f.root_funs_maj)):
+ for type, type_lbl in enumerate(c2f.type_labels):
+ for base, base_lbl in enumerate(c2f.base_labels):
+ if base_lbl:
+ idx = c2f.fun2num(fun,type, base,mode)
+ if HUMAN_READEABLE:
+ chord_dic[idx] = c2f.fun2txt(fun,type, base,mode)
+ else:
+ chord_dic[idx] = idx
+
+print chord_dic
+T = len(chord_dic)
+
+if SAVE_PICKLE:
+ with open(chordmapfilename,'w') as fh:
+ print "Writing chordmap file."
+ pickle.dump(chord_dic,fh)
+
+if SAVE_CSV:
+ csvfile = open(chordmapfilename_csv, "w+b") #opens the file for updating
+ w = csv.writer(csvfile, delimiter='\t')
+ w.writerow(["chordid"] + ["chordlabel"])
+ for chordid,chordlabel in chord_dic.items():
+ w.writerow(["%s"%chordid] + ["%s"%chordlabel])
+ csvfile.close()
+ print("CSV chord map file %s written."%chordmapfilename_csv)
+
+if SAVE_VOCABULARY:
+    # creates a file with each chord on a separate line, giving a vocabulary
+    # which can be used with topic modelling techniques (e.g. turbotopics)
+ csvfile = open(chordino_chords_vocabulary, "w+b") #opens the file for updating
+ w = csv.writer(csvfile, delimiter='\t')
+ for chordid,chordlabel in chord_dic.items():
+ w.writerow(["%s"%chordlabel])
+ csvfile.close()
+ print("Chord vocabulary file %s written."%chordino_chords_vocabulary)
+
+
+
+
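The chordid column in relative-chordmap.csv below is produced by `fun2num`, which packs mode, scale-degree function, chord type and bass function into fixed two-decimal-digit fields, each offset by 1 so that 0 is reserved for "no chord". A minimal sketch of the encoding, plus a hypothetical inverse `num2fun` that is not part of the original sources:

```python
# fun2num packing as used to generate relative-chordmap.csv
def fun2num(fun, typ, bfun, mode):
    if fun == -1:          # 'no chord'
        return 0
    return (mode + 1) * 1000000 + (fun + 1) * 10000 + (typ + 1) * 100 + (bfun + 1)

# hypothetical inverse (not in the original sources)
def num2fun(code):
    if code == 0:
        return None
    mode = code // 1000000 - 1
    fun = (code // 10000) % 100 - 1
    typ = (code // 100) % 100 - 1
    bfun = code % 100 - 1
    return (fun, typ, bfun, mode)

print(fun2num(0, 0, 0, 0))   # -> 1010101, i.e. (M)I/1 in the table
print(num2fun(2080801))      # -> (7, 7, 0, 1), minor-mode Vm7b5 over its root
```

The decimal layout keeps the ids human-decodable: reading 2080801 digit-pair by digit-pair gives mode 2 (minor), function 08 (V in the minor table), type 08 (m7b5), base 01 (root).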
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/chord_sequence_mining/relative-chordmap.csv
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/chord_sequence_mining/relative-chordmap.csv Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,2378 @@
+chordid chordlabel
+0 N
+1040401 (M)#IIm/1
+1040403 (M)#IIm/2
+1040404 (M)#IIm/b3
+1040405 (M)#IIm/3
+1040406 (M)#IIm/4
+1040408 (M)#IIm/5
+1040410 (M)#IIm/6
+1040411 (M)#IIm/b7
+1040412 (M)#IIm/7
+2080801 (m)Vm7b5/1
+2080803 (m)Vm7b5/2
+2080804 (m)Vm7b5/b3
+2080805 (m)Vm7b5/3
+2080806 (m)Vm7b5/4
+2080808 (m)Vm7b5/5
+2080810 (m)Vm7b5/6
+2080811 (m)Vm7b5/b7
+2080812 (m)Vm7b5/7
+1040501 (M)#IIm6/1
+1040503 (M)#IIm6/2
+1040504 (M)#IIm6/b3
+1040505 (M)#IIm6/3
+1040506 (M)#IIm6/4
+1040508 (M)#IIm6/5
+1040510 (M)#IIm6/6
+1040511 (M)#IIm6/b7
+1040512 (M)#IIm6/7
+2080901 (m)Vdim/1
+2080903 (m)Vdim/2
+2080904 (m)Vdim/b3
+2080905 (m)Vdim/3
+2080906 (m)Vdim/4
+2080908 (m)Vdim/5
+2080910 (m)Vdim/6
+2080911 (m)Vdim/b7
+2080912 (m)Vdim/7
+1040601 (M)#IIm7/1
+1040603 (M)#IIm7/2
+1040604 (M)#IIm7/b3
+1040605 (M)#IIm7/3
+1040606 (M)#IIm7/4
+1040608 (M)#IIm7/5
+1040610 (M)#IIm7/6
+1040611 (M)#IIm7/b7
+1040612 (M)#IIm7/7
+2081001 (m)Vdim7/1
+2081003 (m)Vdim7/2
+2081004 (m)Vdim7/b3
+2081005 (m)Vdim7/3
+2081006 (m)Vdim7/4
+2081008 (m)Vdim7/5
+2081010 (m)Vdim7/6
+2081011 (m)Vdim7/b7
+2081012 (m)Vdim7/7
+2040101 (m)III/1
+2040103 (m)III/2
+2040104 (m)III/b3
+2040105 (m)III/3
+2040106 (m)III/4
+2040108 (m)III/5
+2040110 (m)III/6
+2040111 (m)III/b7
+2040112 (m)III/7
+1040701 (M)#IImaj7/1
+1040703 (M)#IImaj7/2
+1040704 (M)#IImaj7/b3
+1040705 (M)#IImaj7/3
+1040706 (M)#IImaj7/4
+1040708 (M)#IImaj7/5
+1040710 (M)#IImaj7/6
+1040711 (M)#IImaj7/b7
+1040712 (M)#IImaj7/7
+2081101 (m)Vaug/1
+2081103 (m)Vaug/2
+2081104 (m)Vaug/b3
+2081105 (m)Vaug/3
+2081106 (m)Vaug/4
+2081108 (m)Vaug/5
+2081110 (m)Vaug/6
+2081111 (m)Vaug/b7
+2081112 (m)Vaug/7
+2040201 (m)III6/1
+2040203 (m)III6/2
+2040204 (m)III6/b3
+2040205 (m)III6/3
+2040206 (m)III6/4
+2040208 (m)III6/5
+2040210 (m)III6/6
+2040211 (m)III6/b7
+2040212 (m)III6/7
+1040801 (M)#IIm7b5/1
+1040803 (M)#IIm7b5/2
+1040804 (M)#IIm7b5/b3
+1040805 (M)#IIm7b5/3
+1040806 (M)#IIm7b5/4
+1040808 (M)#IIm7b5/5
+1040810 (M)#IIm7b5/6
+1040811 (M)#IIm7b5/b7
+1040812 (M)#IIm7b5/7
+2040301 (m)III7/1
+2040303 (m)III7/2
+2040304 (m)III7/b3
+2040305 (m)III7/3
+2040306 (m)III7/4
+2040308 (m)III7/5
+2040310 (m)III7/6
+2040311 (m)III7/b7
+2040312 (m)III7/7
+1040901 (M)#IIdim/1
+1040903 (M)#IIdim/2
+1040904 (M)#IIdim/b3
+1040905 (M)#IIdim/3
+1040906 (M)#IIdim/4
+1040908 (M)#IIdim/5
+1040910 (M)#IIdim/6
+1040911 (M)#IIdim/b7
+1040912 (M)#IIdim/7
+1090101 (M)#V/1
+1090103 (M)#V/2
+1090104 (M)#V/b3
+1090105 (M)#V/3
+1090106 (M)#V/4
+1090108 (M)#V/5
+1090110 (M)#V/6
+1090111 (M)#V/b7
+1090112 (M)#V/7
+2040401 (m)IIIm/1
+2040403 (m)IIIm/2
+2040404 (m)IIIm/b3
+2040405 (m)IIIm/3
+2040406 (m)IIIm/4
+2040408 (m)IIIm/5
+2040410 (m)IIIm/6
+2040411 (m)IIIm/b7
+2040412 (m)IIIm/7
+1041001 (M)#IIdim7/1
+1041003 (M)#IIdim7/2
+1041004 (M)#IIdim7/b3
+1041005 (M)#IIdim7/3
+1041006 (M)#IIdim7/4
+1041008 (M)#IIdim7/5
+1041010 (M)#IIdim7/6
+1041011 (M)#IIdim7/b7
+1041012 (M)#IIdim7/7
+1090201 (M)#V6/1
+1090203 (M)#V6/2
+1090204 (M)#V6/b3
+1090205 (M)#V6/3
+1090206 (M)#V6/4
+1090208 (M)#V6/5
+1090210 (M)#V6/6
+1090211 (M)#V6/b7
+1090212 (M)#V6/7
+2040501 (m)IIIm6/1
+2040503 (m)IIIm6/2
+2040504 (m)IIIm6/b3
+2040505 (m)IIIm6/3
+2040506 (m)IIIm6/4
+2040508 (m)IIIm6/5
+2040510 (m)IIIm6/6
+2040511 (m)IIIm6/b7
+2040512 (m)IIIm6/7
+1041101 (M)#IIaug/1
+1041103 (M)#IIaug/2
+1041104 (M)#IIaug/b3
+1041105 (M)#IIaug/3
+1041106 (M)#IIaug/4
+1041108 (M)#IIaug/5
+1041110 (M)#IIaug/6
+1041111 (M)#IIaug/b7
+1041112 (M)#IIaug/7
+1090301 (M)#V7/1
+1090303 (M)#V7/2
+1090304 (M)#V7/b3
+1090305 (M)#V7/3
+1090306 (M)#V7/4
+1090308 (M)#V7/5
+1090310 (M)#V7/6
+1090311 (M)#V7/b7
+1090312 (M)#V7/7
+2040601 (m)IIIm7/1
+2040603 (m)IIIm7/2
+2040604 (m)IIIm7/b3
+2040605 (m)IIIm7/3
+2040606 (m)IIIm7/4
+2040608 (m)IIIm7/5
+2040610 (m)IIIm7/6
+2040611 (m)IIIm7/b7
+2040612 (m)IIIm7/7
+1090401 (M)#Vm/1
+1090403 (M)#Vm/2
+1090404 (M)#Vm/b3
+1090405 (M)#Vm/3
+1090406 (M)#Vm/4
+1090408 (M)#Vm/5
+1090410 (M)#Vm/6
+1090411 (M)#Vm/b7
+1090412 (M)#Vm/7
+2040701 (m)IIImaj7/1
+2040703 (m)IIImaj7/2
+2040704 (m)IIImaj7/b3
+2040705 (m)IIImaj7/3
+2040706 (m)IIImaj7/4
+2040708 (m)IIImaj7/5
+2040710 (m)IIImaj7/6
+2040711 (m)IIImaj7/b7
+2040712 (m)IIImaj7/7
+1090501 (M)#Vm6/1
+1090503 (M)#Vm6/2
+1090504 (M)#Vm6/b3
+1090505 (M)#Vm6/3
+1090506 (M)#Vm6/4
+1090508 (M)#Vm6/5
+1090510 (M)#Vm6/6
+1090511 (M)#Vm6/b7
+1090512 (M)#Vm6/7
+2040801 (m)IIIm7b5/1
+2040803 (m)IIIm7b5/2
+2040804 (m)IIIm7b5/b3
+2040805 (m)IIIm7b5/3
+2040806 (m)IIIm7b5/4
+2040808 (m)IIIm7b5/5
+2040810 (m)IIIm7b5/6
+2040811 (m)IIIm7b5/b7
+2040812 (m)IIIm7b5/7
+1090601 (M)#Vm7/1
+1090603 (M)#Vm7/2
+1090604 (M)#Vm7/b3
+1090605 (M)#Vm7/3
+1090606 (M)#Vm7/4
+1090608 (M)#Vm7/5
+1090610 (M)#Vm7/6
+1090611 (M)#Vm7/b7
+1090612 (M)#Vm7/7
+2040901 (m)IIIdim/1
+2040903 (m)IIIdim/2
+2040904 (m)IIIdim/b3
+2040905 (m)IIIdim/3
+2040906 (m)IIIdim/4
+2040908 (m)IIIdim/5
+2040910 (m)IIIdim/6
+2040911 (m)IIIdim/b7
+2040912 (m)IIIdim/7
+2090101 (m)VI/1
+2090103 (m)VI/2
+2090104 (m)VI/b3
+2090105 (m)VI/3
+2090106 (m)VI/4
+2090108 (m)VI/5
+2090110 (m)VI/6
+2090111 (m)VI/b7
+2090112 (m)VI/7
+1090701 (M)#Vmaj7/1
+1090703 (M)#Vmaj7/2
+1090704 (M)#Vmaj7/b3
+1090705 (M)#Vmaj7/3
+1090706 (M)#Vmaj7/4
+1090708 (M)#Vmaj7/5
+1090710 (M)#Vmaj7/6
+1090711 (M)#Vmaj7/b7
+1090712 (M)#Vmaj7/7
+2041001 (m)IIIdim7/1
+2041003 (m)IIIdim7/2
+2041004 (m)IIIdim7/b3
+2041005 (m)IIIdim7/3
+2041006 (m)IIIdim7/4
+2041008 (m)IIIdim7/5
+2041010 (m)IIIdim7/6
+2041011 (m)IIIdim7/b7
+2041012 (m)IIIdim7/7
+2090201 (m)VI6/1
+2090203 (m)VI6/2
+2090204 (m)VI6/b3
+2090205 (m)VI6/3
+2090206 (m)VI6/4
+2090208 (m)VI6/5
+2090210 (m)VI6/6
+2090211 (m)VI6/b7
+2090212 (m)VI6/7
+1090801 (M)#Vm7b5/1
+1090803 (M)#Vm7b5/2
+1090804 (M)#Vm7b5/b3
+1090805 (M)#Vm7b5/3
+1090806 (M)#Vm7b5/4
+1090808 (M)#Vm7b5/5
+1090810 (M)#Vm7b5/6
+1090811 (M)#Vm7b5/b7
+1090812 (M)#Vm7b5/7
+2041101 (m)IIIaug/1
+2041103 (m)IIIaug/2
+2041104 (m)IIIaug/b3
+2041105 (m)IIIaug/3
+2041106 (m)IIIaug/4
+2041108 (m)IIIaug/5
+2041110 (m)IIIaug/6
+2041111 (m)IIIaug/b7
+2041112 (m)IIIaug/7
+2090301 (m)VI7/1
+2090303 (m)VI7/2
+2090304 (m)VI7/b3
+2090305 (m)VI7/3
+2090306 (m)VI7/4
+2090308 (m)VI7/5
+2090310 (m)VI7/6
+2090311 (m)VI7/b7
+2090312 (m)VI7/7
+1090901 (M)#Vdim/1
+1090903 (M)#Vdim/2
+1090904 (M)#Vdim/b3
+1090905 (M)#Vdim/3
+1090906 (M)#Vdim/4
+1090908 (M)#Vdim/5
+1090910 (M)#Vdim/6
+1090911 (M)#Vdim/b7
+1090912 (M)#Vdim/7
+2090401 (m)VIm/1
+2090403 (m)VIm/2
+2090404 (m)VIm/b3
+2090405 (m)VIm/3
+2090406 (m)VIm/4
+2090408 (m)VIm/5
+2090410 (m)VIm/6
+2090411 (m)VIm/b7
+2090412 (m)VIm/7
+1091001 (M)#Vdim7/1
+1091003 (M)#Vdim7/2
+1091004 (M)#Vdim7/b3
+1091005 (M)#Vdim7/3
+1091006 (M)#Vdim7/4
+1091008 (M)#Vdim7/5
+1091010 (M)#Vdim7/6
+1091011 (M)#Vdim7/b7
+1091012 (M)#Vdim7/7
+1050101 (M)III/1
+1050103 (M)III/2
+1050104 (M)III/b3
+1050105 (M)III/3
+1050106 (M)III/4
+1050108 (M)III/5
+1050110 (M)III/6
+1050111 (M)III/b7
+1050112 (M)III/7
+2090501 (m)VIm6/1
+2090503 (m)VIm6/2
+2090504 (m)VIm6/b3
+2090505 (m)VIm6/3
+2090506 (m)VIm6/4
+2090508 (m)VIm6/5
+2090510 (m)VIm6/6
+2090511 (m)VIm6/b7
+2090512 (m)VIm6/7
+1091101 (M)#Vaug/1
+1091103 (M)#Vaug/2
+1091104 (M)#Vaug/b3
+1091105 (M)#Vaug/3
+1091106 (M)#Vaug/4
+1091108 (M)#Vaug/5
+1091110 (M)#Vaug/6
+1091111 (M)#Vaug/b7
+1091112 (M)#Vaug/7
+1050201 (M)III6/1
+1050203 (M)III6/2
+1050204 (M)III6/b3
+1050205 (M)III6/3
+1050206 (M)III6/4
+1050208 (M)III6/5
+1050210 (M)III6/6
+1050211 (M)III6/b7
+1050212 (M)III6/7
+2090601 (m)VIm7/1
+2090603 (m)VIm7/2
+2090604 (m)VIm7/b3
+2090605 (m)VIm7/3
+2090606 (m)VIm7/4
+2090608 (m)VIm7/5
+2090610 (m)VIm7/6
+2090611 (m)VIm7/b7
+2090612 (m)VIm7/7
+1050301 (M)III7/1
+1050303 (M)III7/2
+1050304 (M)III7/b3
+1050305 (M)III7/3
+1050306 (M)III7/4
+1050308 (M)III7/5
+1050310 (M)III7/6
+1050311 (M)III7/b7
+1050312 (M)III7/7
+2090701 (m)VImaj7/1
+2090703 (m)VImaj7/2
+2090704 (m)VImaj7/b3
+2090705 (m)VImaj7/3
+2090706 (m)VImaj7/4
+2090708 (m)VImaj7/5
+2090710 (m)VImaj7/6
+2090711 (m)VImaj7/b7
+2090712 (m)VImaj7/7
+1050401 (M)IIIm/1
+1050403 (M)IIIm/2
+1050404 (M)IIIm/b3
+1050405 (M)IIIm/3
+1050406 (M)IIIm/4
+1050408 (M)IIIm/5
+1050410 (M)IIIm/6
+1050411 (M)IIIm/b7
+1050412 (M)IIIm/7
+2090801 (m)VIm7b5/1
+2090803 (m)VIm7b5/2
+2090804 (m)VIm7b5/b3
+2090805 (m)VIm7b5/3
+2090806 (m)VIm7b5/4
+2090808 (m)VIm7b5/5
+2090810 (m)VIm7b5/6
+2090811 (m)VIm7b5/b7
+2090812 (m)VIm7b5/7
+1050501 (M)IIIm6/1
+1050503 (M)IIIm6/2
+1050504 (M)IIIm6/b3
+1050505 (M)IIIm6/3
+1050506 (M)IIIm6/4
+1050508 (M)IIIm6/5
+1050510 (M)IIIm6/6
+1050511 (M)IIIm6/b7
+1050512 (M)IIIm6/7
+2090901 (m)VIdim/1
+2090903 (m)VIdim/2
+2090904 (m)VIdim/b3
+2090905 (m)VIdim/3
+2090906 (m)VIdim/4
+2090908 (m)VIdim/5
+2090910 (m)VIdim/6
+2090911 (m)VIdim/b7
+2090912 (m)VIdim/7
+1050601 (M)IIIm7/1
+1050603 (M)IIIm7/2
+1050604 (M)IIIm7/b3
+1050605 (M)IIIm7/3
+1050606 (M)IIIm7/4
+1050608 (M)IIIm7/5
+1050610 (M)IIIm7/6
+1050611 (M)IIIm7/b7
+1050612 (M)IIIm7/7
+2091001 (m)VIdim7/1
+2091003 (m)VIdim7/2
+2091004 (m)VIdim7/b3
+2091005 (m)VIdim7/3
+2091006 (m)VIdim7/4
+2091008 (m)VIdim7/5
+2091010 (m)VIdim7/6
+2091011 (m)VIdim7/b7
+2091012 (m)VIdim7/7
+2050101 (m)#III/1
+2050103 (m)#III/2
+2050104 (m)#III/b3
+2050105 (m)#III/3
+2050106 (m)#III/4
+2050108 (m)#III/5
+2050110 (m)#III/6
+2050111 (m)#III/b7
+2050112 (m)#III/7
+1050701 (M)IIImaj7/1
+1050703 (M)IIImaj7/2
+1050704 (M)IIImaj7/b3
+1050705 (M)IIImaj7/3
+1050706 (M)IIImaj7/4
+1050708 (M)IIImaj7/5
+1050710 (M)IIImaj7/6
+1050711 (M)IIImaj7/b7
+1050712 (M)IIImaj7/7
+2091101 (m)VIaug/1
+2091103 (m)VIaug/2
+2091104 (m)VIaug/b3
+2091105 (m)VIaug/3
+2091106 (m)VIaug/4
+2091108 (m)VIaug/5
+2091110 (m)VIaug/6
+2091111 (m)VIaug/b7
+2091112 (m)VIaug/7
+2050201 (m)#III6/1
+2050203 (m)#III6/2
+2050204 (m)#III6/b3
+2050205 (m)#III6/3
+2050206 (m)#III6/4
+2050208 (m)#III6/5
+2050210 (m)#III6/6
+2050211 (m)#III6/b7
+2050212 (m)#III6/7
+1050801 (M)IIIm7b5/1
+1050803 (M)IIIm7b5/2
+1050804 (M)IIIm7b5/b3
+1050805 (M)IIIm7b5/3
+1050806 (M)IIIm7b5/4
+1050808 (M)IIIm7b5/5
+1050810 (M)IIIm7b5/6
+1050811 (M)IIIm7b5/b7
+1050812 (M)IIIm7b5/7
+2050301 (m)#III7/1
+2050303 (m)#III7/2
+2050304 (m)#III7/b3
+2050305 (m)#III7/3
+2050306 (m)#III7/4
+2050308 (m)#III7/5
+2050310 (m)#III7/6
+2050311 (m)#III7/b7
+2050312 (m)#III7/7
+1050901 (M)IIIdim/1
+1050903 (M)IIIdim/2
+1050904 (M)IIIdim/b3
+1050905 (M)IIIdim/3
+1050906 (M)IIIdim/4
+1050908 (M)IIIdim/5
+1050910 (M)IIIdim/6
+1050911 (M)IIIdim/b7
+1050912 (M)IIIdim/7
+1100101 (M)VI/1
+1100103 (M)VI/2
+1100104 (M)VI/b3
+1100105 (M)VI/3
+1100106 (M)VI/4
+1100108 (M)VI/5
+1100110 (M)VI/6
+1100111 (M)VI/b7
+1100112 (M)VI/7
+2050401 (m)#IIIm/1
+2050403 (m)#IIIm/2
+2050404 (m)#IIIm/b3
+2050405 (m)#IIIm/3
+2050406 (m)#IIIm/4
+2050408 (m)#IIIm/5
+2050410 (m)#IIIm/6
+2050411 (m)#IIIm/b7
+2050412 (m)#IIIm/7
+1051001 (M)IIIdim7/1
+1051003 (M)IIIdim7/2
+1051004 (M)IIIdim7/b3
+1051005 (M)IIIdim7/3
+1051006 (M)IIIdim7/4
+1051008 (M)IIIdim7/5
+1051010 (M)IIIdim7/6
+1051011 (M)IIIdim7/b7
+1051012 (M)IIIdim7/7
+1100201 (M)VI6/1
+1100203 (M)VI6/2
+1100204 (M)VI6/b3
+1100205 (M)VI6/3
+1100206 (M)VI6/4
+1100208 (M)VI6/5
+1100210 (M)VI6/6
+1100211 (M)VI6/b7
+1100212 (M)VI6/7
+1010101 (M)I/1
+1010103 (M)I/2
+1010104 (M)I/b3
+1010105 (M)I/3
+1010106 (M)I/4
+1010108 (M)I/5
+1010110 (M)I/6
+1010111 (M)I/b7
+1010112 (M)I/7
+2050501 (m)#IIIm6/1
+2050503 (m)#IIIm6/2
+2050504 (m)#IIIm6/b3
+2050505 (m)#IIIm6/3
+2050506 (m)#IIIm6/4
+2050508 (m)#IIIm6/5
+2050510 (m)#IIIm6/6
+2050511 (m)#IIIm6/b7
+2050512 (m)#IIIm6/7
+1051101 (M)IIIaug/1
+1051103 (M)IIIaug/2
+1051104 (M)IIIaug/b3
+1051105 (M)IIIaug/3
+1051106 (M)IIIaug/4
+1051108 (M)IIIaug/5
+1051110 (M)IIIaug/6
+1051111 (M)IIIaug/b7
+1051112 (M)IIIaug/7
+1100301 (M)VI7/1
+1100303 (M)VI7/2
+1100304 (M)VI7/b3
+1100305 (M)VI7/3
+1100306 (M)VI7/4
+1100308 (M)VI7/5
+1100310 (M)VI7/6
+1100311 (M)VI7/b7
+1100312 (M)VI7/7
+1010201 (M)I6/1
+1010203 (M)I6/2
+1010204 (M)I6/b3
+1010205 (M)I6/3
+1010206 (M)I6/4
+1010208 (M)I6/5
+1010210 (M)I6/6
+1010211 (M)I6/b7
+1010212 (M)I6/7
+2050601 (m)#IIIm7/1
+2050603 (m)#IIIm7/2
+2050604 (m)#IIIm7/b3
+2050605 (m)#IIIm7/3
+2050606 (m)#IIIm7/4
+2050608 (m)#IIIm7/5
+2050610 (m)#IIIm7/6
+2050611 (m)#IIIm7/b7
+2050612 (m)#IIIm7/7
+1100401 (M)VIm/1
+1100403 (M)VIm/2
+1100404 (M)VIm/b3
+1100405 (M)VIm/3
+1100406 (M)VIm/4
+1100408 (M)VIm/5
+1100410 (M)VIm/6
+1100411 (M)VIm/b7
+1100412 (M)VIm/7
+1010301 (M)I7/1
+1010303 (M)I7/2
+1010304 (M)I7/b3
+1010305 (M)I7/3
+1010306 (M)I7/4
+1010308 (M)I7/5
+1010310 (M)I7/6
+1010311 (M)I7/b7
+1010312 (M)I7/7
+2050701 (m)#IIImaj7/1
+2050703 (m)#IIImaj7/2
+2050704 (m)#IIImaj7/b3
+2050705 (m)#IIImaj7/3
+2050706 (m)#IIImaj7/4
+2050708 (m)#IIImaj7/5
+2050710 (m)#IIImaj7/6
+2050711 (m)#IIImaj7/b7
+2050712 (m)#IIImaj7/7
+1100501 (M)VIm6/1
+1100503 (M)VIm6/2
+1100504 (M)VIm6/b3
+1100505 (M)VIm6/3
+1100506 (M)VIm6/4
+1100508 (M)VIm6/5
+1100510 (M)VIm6/6
+1100511 (M)VIm6/b7
+1100512 (M)VIm6/7
+1010401 (M)Im/1
+1010403 (M)Im/2
+1010404 (M)Im/b3
+1010405 (M)Im/3
+1010406 (M)Im/4
+1010408 (M)Im/5
+1010410 (M)Im/6
+1010411 (M)Im/b7
+1010412 (M)Im/7
+2050801 (m)#IIIm7b5/1
+2050803 (m)#IIIm7b5/2
+2050804 (m)#IIIm7b5/b3
+2050805 (m)#IIIm7b5/3
+2050806 (m)#IIIm7b5/4
+2050808 (m)#IIIm7b5/5
+2050810 (m)#IIIm7b5/6
+2050811 (m)#IIIm7b5/b7
+2050812 (m)#IIIm7b5/7
+1100601 (M)VIm7/1
+1100603 (M)VIm7/2
+1100604 (M)VIm7/b3
+1100605 (M)VIm7/3
+1100606 (M)VIm7/4
+1100608 (M)VIm7/5
+1100610 (M)VIm7/6
+1100611 (M)VIm7/b7
+1100612 (M)VIm7/7
+1010501 (M)Im6/1
+1010503 (M)Im6/2
+1010504 (M)Im6/b3
+1010505 (M)Im6/3
+1010506 (M)Im6/4
+1010508 (M)Im6/5
+1010510 (M)Im6/6
+1010511 (M)Im6/b7
+1010512 (M)Im6/7
+2050901 (m)#IIIdim/1
+2050903 (m)#IIIdim/2
+2050904 (m)#IIIdim/b3
+2050905 (m)#IIIdim/3
+2050906 (m)#IIIdim/4
+2050908 (m)#IIIdim/5
+2050910 (m)#IIIdim/6
+2050911 (m)#IIIdim/b7
+2050912 (m)#IIIdim/7
+2100101 (m)#VI/1
+2100103 (m)#VI/2
+2100104 (m)#VI/b3
+2100105 (m)#VI/3
+2100106 (m)#VI/4
+2100108 (m)#VI/5
+2100110 (m)#VI/6
+2100111 (m)#VI/b7
+2100112 (m)#VI/7
+1100701 (M)VImaj7/1
+1100703 (M)VImaj7/2
+1100704 (M)VImaj7/b3
+1100705 (M)VImaj7/3
+1100706 (M)VImaj7/4
+1100708 (M)VImaj7/5
+1100710 (M)VImaj7/6
+1100711 (M)VImaj7/b7
+1100712 (M)VImaj7/7
+1010601 (M)Im7/1
+1010603 (M)Im7/2
+1010604 (M)Im7/b3
+1010605 (M)Im7/3
+1010606 (M)Im7/4
+1010608 (M)Im7/5
+1010610 (M)Im7/6
+1010611 (M)Im7/b7
+1010612 (M)Im7/7
+2051001 (m)#IIIdim7/1
+2051003 (m)#IIIdim7/2
+2051004 (m)#IIIdim7/b3
+2051005 (m)#IIIdim7/3
+2051006 (m)#IIIdim7/4
+2051008 (m)#IIIdim7/5
+2051010 (m)#IIIdim7/6
+2051011 (m)#IIIdim7/b7
+2051012 (m)#IIIdim7/7
+2100201 (m)#VI6/1
+2100203 (m)#VI6/2
+2100204 (m)#VI6/b3
+2100205 (m)#VI6/3
+2100206 (m)#VI6/4
+2100208 (m)#VI6/5
+2100210 (m)#VI6/6
+2100211 (m)#VI6/b7
+2100212 (m)#VI6/7
+2010101 (m)I/1
+2010103 (m)I/2
+2010104 (m)I/b3
+2010105 (m)I/3
+2010106 (m)I/4
+2010108 (m)I/5
+2010110 (m)I/6
+2010111 (m)I/b7
+2010112 (m)I/7
+1100801 (M)VIm7b5/1
+1100803 (M)VIm7b5/2
+1100804 (M)VIm7b5/b3
+1100805 (M)VIm7b5/3
+1100806 (M)VIm7b5/4
+1100808 (M)VIm7b5/5
+1100810 (M)VIm7b5/6
+1100811 (M)VIm7b5/b7
+1100812 (M)VIm7b5/7
+1010701 (M)Imaj7/1
+1010703 (M)Imaj7/2
+1010704 (M)Imaj7/b3
+1010705 (M)Imaj7/3
+1010706 (M)Imaj7/4
+1010708 (M)Imaj7/5
+1010710 (M)Imaj7/6
+1010711 (M)Imaj7/b7
+1010712 (M)Imaj7/7
+2051101 (m)#IIIaug/1
+2051103 (m)#IIIaug/2
+2051104 (m)#IIIaug/b3
+2051105 (m)#IIIaug/3
+2051106 (m)#IIIaug/4
+2051108 (m)#IIIaug/5
+2051110 (m)#IIIaug/6
+2051111 (m)#IIIaug/b7
+2051112 (m)#IIIaug/7
+2100301 (m)#VI7/1
+2100303 (m)#VI7/2
+2100304 (m)#VI7/b3
+2100305 (m)#VI7/3
+2100306 (m)#VI7/4
+2100308 (m)#VI7/5
+2100310 (m)#VI7/6
+2100311 (m)#VI7/b7
+2100312 (m)#VI7/7
+2010201 (m)I6/1
+2010203 (m)I6/2
+2010204 (m)I6/b3
+2010205 (m)I6/3
+2010206 (m)I6/4
+2010208 (m)I6/5
+2010210 (m)I6/6
+2010211 (m)I6/b7
+2010212 (m)I6/7
+1100901 (M)VIdim/1
+1100903 (M)VIdim/2
+1100904 (M)VIdim/b3
+1100905 (M)VIdim/3
+1100906 (M)VIdim/4
+1100908 (M)VIdim/5
+1100910 (M)VIdim/6
+1100911 (M)VIdim/b7
+1100912 (M)VIdim/7
+1010801 (M)Im7b5/1
+1010803 (M)Im7b5/2
+1010804 (M)Im7b5/b3
+1010805 (M)Im7b5/3
+1010806 (M)Im7b5/4
+1010808 (M)Im7b5/5
+1010810 (M)Im7b5/6
+1010811 (M)Im7b5/b7
+1010812 (M)Im7b5/7
+2100401 (m)#VIm/1
+2100403 (m)#VIm/2
+2100404 (m)#VIm/b3
+2100405 (m)#VIm/3
+2100406 (m)#VIm/4
+2100408 (m)#VIm/5
+2100410 (m)#VIm/6
+2100411 (m)#VIm/b7
+2100412 (m)#VIm/7
+2010301 (m)I7/1
+2010303 (m)I7/2
+2010304 (m)I7/b3
+2010305 (m)I7/3
+2010306 (m)I7/4
+2010308 (m)I7/5
+2010310 (m)I7/6
+2010311 (m)I7/b7
+2010312 (m)I7/7
+1101001 (M)VIdim7/1
+1101003 (M)VIdim7/2
+1101004 (M)VIdim7/b3
+1101005 (M)VIdim7/3
+1101006 (M)VIdim7/4
+1101008 (M)VIdim7/5
+1101010 (M)VIdim7/6
+1101011 (M)VIdim7/b7
+1101012 (M)VIdim7/7
+1010901 (M)Idim/1
+1010903 (M)Idim/2
+1010904 (M)Idim/b3
+1010905 (M)Idim/3
+1010906 (M)Idim/4
+1010908 (M)Idim/5
+1010910 (M)Idim/6
+1010911 (M)Idim/b7
+1010912 (M)Idim/7
+1060101 (M)IV/1
+1060103 (M)IV/2
+1060104 (M)IV/b3
+1060105 (M)IV/3
+1060106 (M)IV/4
+1060108 (M)IV/5
+1060110 (M)IV/6
+1060111 (M)IV/b7
+1060112 (M)IV/7
+2100501 (m)#VIm6/1
+2100503 (m)#VIm6/2
+2100504 (m)#VIm6/b3
+2100505 (m)#VIm6/3
+2100506 (m)#VIm6/4
+2100508 (m)#VIm6/5
+2100510 (m)#VIm6/6
+2100511 (m)#VIm6/b7
+2100512 (m)#VIm6/7
+2010401 (m)Im/1
+2010403 (m)Im/2
+2010404 (m)Im/b3
+2010405 (m)Im/3
+2010406 (m)Im/4
+2010408 (m)Im/5
+2010410 (m)Im/6
+2010411 (m)Im/b7
+2010412 (m)Im/7
+1101101 (M)VIaug/1
+1101103 (M)VIaug/2
+1101104 (M)VIaug/b3
+1101105 (M)VIaug/3
+1101106 (M)VIaug/4
+1101108 (M)VIaug/5
+1101110 (M)VIaug/6
+1101111 (M)VIaug/b7
+1101112 (M)VIaug/7
+1011001 (M)Idim7/1
+1011003 (M)Idim7/2
+1011004 (M)Idim7/b3
+1011005 (M)Idim7/3
+1011006 (M)Idim7/4
+1011008 (M)Idim7/5
+1011010 (M)Idim7/6
+1011011 (M)Idim7/b7
+1011012 (M)Idim7/7
+1060201 (M)IV6/1
+1060203 (M)IV6/2
+1060204 (M)IV6/b3
+1060205 (M)IV6/3
+1060206 (M)IV6/4
+1060208 (M)IV6/5
+1060210 (M)IV6/6
+1060211 (M)IV6/b7
+1060212 (M)IV6/7
+2100601 (m)#VIm7/1
+2100603 (m)#VIm7/2
+2100604 (m)#VIm7/b3
+2100605 (m)#VIm7/3
+2100606 (m)#VIm7/4
+2100608 (m)#VIm7/5
+2100610 (m)#VIm7/6
+2100611 (m)#VIm7/b7
+2100612 (m)#VIm7/7
+2010501 (m)Im6/1
+2010503 (m)Im6/2
+2010504 (m)Im6/b3
+2010505 (m)Im6/3
+2010506 (m)Im6/4
+2010508 (m)Im6/5
+2010510 (m)Im6/6
+2010511 (m)Im6/b7
+2010512 (m)Im6/7
+1011101 (M)Iaug/1
+1011103 (M)Iaug/2
+1011104 (M)Iaug/b3
+1011105 (M)Iaug/3
+1011106 (M)Iaug/4
+1011108 (M)Iaug/5
+1011110 (M)Iaug/6
+1011111 (M)Iaug/b7
+1011112 (M)Iaug/7
+1060301 (M)IV7/1
+1060303 (M)IV7/2
+1060304 (M)IV7/b3
+1060305 (M)IV7/3
+1060306 (M)IV7/4
+1060308 (M)IV7/5
+1060310 (M)IV7/6
+1060311 (M)IV7/b7
+1060312 (M)IV7/7
+2100701 (m)#VImaj7/1
+2100703 (m)#VImaj7/2
+2100704 (m)#VImaj7/b3
+2100705 (m)#VImaj7/3
+2100706 (m)#VImaj7/4
+2100708 (m)#VImaj7/5
+2100710 (m)#VImaj7/6
+2100711 (m)#VImaj7/b7
+2100712 (m)#VImaj7/7
+2010601 (m)Im7/1
+2010603 (m)Im7/2
+2010604 (m)Im7/b3
+2010605 (m)Im7/3
+2010606 (m)Im7/4
+2010608 (m)Im7/5
+2010610 (m)Im7/6
+2010611 (m)Im7/b7
+2010612 (m)Im7/7
+1060401 (M)IVm/1
+1060403 (M)IVm/2
+1060404 (M)IVm/b3
+1060405 (M)IVm/3
+1060406 (M)IVm/4
+1060408 (M)IVm/5
+1060410 (M)IVm/6
+1060411 (M)IVm/b7
+1060412 (M)IVm/7
+2100801 (m)#VIm7b5/1
+2100803 (m)#VIm7b5/2
+2100804 (m)#VIm7b5/b3
+2100805 (m)#VIm7b5/3
+2100806 (m)#VIm7b5/4
+2100808 (m)#VIm7b5/5
+2100810 (m)#VIm7b5/6
+2100811 (m)#VIm7b5/b7
+2100812 (m)#VIm7b5/7
+2010701 (m)Imaj7/1
+2010703 (m)Imaj7/2
+2010704 (m)Imaj7/b3
+2010705 (m)Imaj7/3
+2010706 (m)Imaj7/4
+2010708 (m)Imaj7/5
+2010710 (m)Imaj7/6
+2010711 (m)Imaj7/b7
+2010712 (m)Imaj7/7
+1060501 (M)IVm6/1
+1060503 (M)IVm6/2
+1060504 (M)IVm6/b3
+1060505 (M)IVm6/3
+1060506 (M)IVm6/4
+1060508 (M)IVm6/5
+1060510 (M)IVm6/6
+1060511 (M)IVm6/b7
+1060512 (M)IVm6/7
+2100901 (m)#VIdim/1
+2100903 (m)#VIdim/2
+2100904 (m)#VIdim/b3
+2100905 (m)#VIdim/3
+2100906 (m)#VIdim/4
+2100908 (m)#VIdim/5
+2100910 (m)#VIdim/6
+2100911 (m)#VIdim/b7
+2100912 (m)#VIdim/7
+2010801 (m)Im7b5/1
+2010803 (m)Im7b5/2
+2010804 (m)Im7b5/b3
+2010805 (m)Im7b5/3
+2010806 (m)Im7b5/4
+2010808 (m)Im7b5/5
+2010810 (m)Im7b5/6
+2010811 (m)Im7b5/b7
+2010812 (m)Im7b5/7
+1060601 (M)IVm7/1
+1060603 (M)IVm7/2
+1060604 (M)IVm7/b3
+1060605 (M)IVm7/3
+1060606 (M)IVm7/4
+1060608 (M)IVm7/5
+1060610 (M)IVm7/6
+1060611 (M)IVm7/b7
+1060612 (M)IVm7/7
+2101001 (m)#VIdim7/1
+2101003 (m)#VIdim7/2
+2101004 (m)#VIdim7/b3
+2101005 (m)#VIdim7/3
+2101006 (m)#VIdim7/4
+2101008 (m)#VIdim7/5
+2101010 (m)#VIdim7/6
+2101011 (m)#VIdim7/b7
+2101012 (m)#VIdim7/7
+2010901 (m)Idim/1
+2010903 (m)Idim/2
+2010904 (m)Idim/b3
+2010905 (m)Idim/3
+2010906 (m)Idim/4
+2010908 (m)Idim/5
+2010910 (m)Idim/6
+2010911 (m)Idim/b7
+2010912 (m)Idim/7
+2060101 (m)IV/1
+2060103 (m)IV/2
+2060104 (m)IV/b3
+2060105 (m)IV/3
+2060106 (m)IV/4
+2060108 (m)IV/5
+2060110 (m)IV/6
+2060111 (m)IV/b7
+2060112 (m)IV/7
+1060701 (M)IVmaj7/1
+1060703 (M)IVmaj7/2
+1060704 (M)IVmaj7/b3
+1060705 (M)IVmaj7/3
+1060706 (M)IVmaj7/4
+1060708 (M)IVmaj7/5
+1060710 (M)IVmaj7/6
+1060711 (M)IVmaj7/b7
+1060712 (M)IVmaj7/7
+2101101 (m)#VIaug/1
+2101103 (m)#VIaug/2
+2101104 (m)#VIaug/b3
+2101105 (m)#VIaug/3
+2101106 (m)#VIaug/4
+2101108 (m)#VIaug/5
+2101110 (m)#VIaug/6
+2101111 (m)#VIaug/b7
+2101112 (m)#VIaug/7
+2011001 (m)Idim7/1
+2011003 (m)Idim7/2
+2011004 (m)Idim7/b3
+2011005 (m)Idim7/3
+2011006 (m)Idim7/4
+2011008 (m)Idim7/5
+2011010 (m)Idim7/6
+2011011 (m)Idim7/b7
+2011012 (m)Idim7/7
+2060201 (m)IV6/1
+2060203 (m)IV6/2
+2060204 (m)IV6/b3
+2060205 (m)IV6/3
+2060206 (m)IV6/4
+2060208 (m)IV6/5
+2060210 (m)IV6/6
+2060211 (m)IV6/b7
+2060212 (m)IV6/7
+1060801 (M)IVm7b5/1
+1060803 (M)IVm7b5/2
+1060804 (M)IVm7b5/b3
+1060805 (M)IVm7b5/3
+1060806 (M)IVm7b5/4
+1060808 (M)IVm7b5/5
+1060810 (M)IVm7b5/6
+1060811 (M)IVm7b5/b7
+1060812 (M)IVm7b5/7
+2011101 (m)Iaug/1
+2011103 (m)Iaug/2
+2011104 (m)Iaug/b3
+2011105 (m)Iaug/3
+2011106 (m)Iaug/4
+2011108 (m)Iaug/5
+2011110 (m)Iaug/6
+2011111 (m)Iaug/b7
+2011112 (m)Iaug/7
+2060301 (m)IV7/1
+2060303 (m)IV7/2
+2060304 (m)IV7/b3
+2060305 (m)IV7/3
+2060306 (m)IV7/4
+2060308 (m)IV7/5
+2060310 (m)IV7/6
+2060311 (m)IV7/b7
+2060312 (m)IV7/7
+1060901 (M)IVdim/1
+1060903 (M)IVdim/2
+1060904 (M)IVdim/b3
+1060905 (M)IVdim/3
+1060906 (M)IVdim/4
+1060908 (M)IVdim/5
+1060910 (M)IVdim/6
+1060911 (M)IVdim/b7
+1060912 (M)IVdim/7
+1110101 (M)#VI/1
+1110103 (M)#VI/2
+1110104 (M)#VI/b3
+1110105 (M)#VI/3
+1110106 (M)#VI/4
+1110108 (M)#VI/5
+1110110 (M)#VI/6
+1110111 (M)#VI/b7
+1110112 (M)#VI/7
+2060401 (m)IVm/1
+2060403 (m)IVm/2
+2060404 (m)IVm/b3
+2060405 (m)IVm/3
+2060406 (m)IVm/4
+2060408 (m)IVm/5
+2060410 (m)IVm/6
+2060411 (m)IVm/b7
+2060412 (m)IVm/7
+1061001 (M)IVdim7/1
+1061003 (M)IVdim7/2
+1061004 (M)IVdim7/b3
+1061005 (M)IVdim7/3
+1061006 (M)IVdim7/4
+1061008 (M)IVdim7/5
+1061010 (M)IVdim7/6
+1061011 (M)IVdim7/b7
+1061012 (M)IVdim7/7
+1110201 (M)#VI6/1
+1110203 (M)#VI6/2
+1110204 (M)#VI6/b3
+1110205 (M)#VI6/3
+1110206 (M)#VI6/4
+1110208 (M)#VI6/5
+1110210 (M)#VI6/6
+1110211 (M)#VI6/b7
+1110212 (M)#VI6/7
+1020101 (M)#I/1
+1020103 (M)#I/2
+1020104 (M)#I/b3
+1020105 (M)#I/3
+1020106 (M)#I/4
+1020108 (M)#I/5
+1020110 (M)#I/6
+1020111 (M)#I/b7
+1020112 (M)#I/7
+2060501 (m)IVm6/1
+2060503 (m)IVm6/2
+2060504 (m)IVm6/b3
+2060505 (m)IVm6/3
+2060506 (m)IVm6/4
+2060508 (m)IVm6/5
+2060510 (m)IVm6/6
+2060511 (m)IVm6/b7
+2060512 (m)IVm6/7
+1061101 (M)IVaug/1
+1061103 (M)IVaug/2
+1061104 (M)IVaug/b3
+1061105 (M)IVaug/3
+1061106 (M)IVaug/4
+1061108 (M)IVaug/5
+1061110 (M)IVaug/6
+1061111 (M)IVaug/b7
+1061112 (M)IVaug/7
+1110301 (M)#VI7/1
+1110303 (M)#VI7/2
+1110304 (M)#VI7/b3
+1110305 (M)#VI7/3
+1110306 (M)#VI7/4
+1110308 (M)#VI7/5
+1110310 (M)#VI7/6
+1110311 (M)#VI7/b7
+1110312 (M)#VI7/7
+1020201 (M)#I6/1
+1020203 (M)#I6/2
+1020204 (M)#I6/b3
+1020205 (M)#I6/3
+1020206 (M)#I6/4
+1020208 (M)#I6/5
+1020210 (M)#I6/6
+1020211 (M)#I6/b7
+1020212 (M)#I6/7
+2060601 (m)IVm7/1
+2060603 (m)IVm7/2
+2060604 (m)IVm7/b3
+2060605 (m)IVm7/3
+2060606 (m)IVm7/4
+2060608 (m)IVm7/5
+2060610 (m)IVm7/6
+2060611 (m)IVm7/b7
+2060612 (m)IVm7/7
+1110401 (M)#VIm/1
+1110403 (M)#VIm/2
+1110404 (M)#VIm/b3
+1110405 (M)#VIm/3
+1110406 (M)#VIm/4
+1110408 (M)#VIm/5
+1110410 (M)#VIm/6
+1110411 (M)#VIm/b7
+1110412 (M)#VIm/7
+1020301 (M)#I7/1
+1020303 (M)#I7/2
+1020304 (M)#I7/b3
+1020305 (M)#I7/3
+1020306 (M)#I7/4
+1020308 (M)#I7/5
+1020310 (M)#I7/6
+1020311 (M)#I7/b7
+1020312 (M)#I7/7
+2060701 (m)IVmaj7/1
+2060703 (m)IVmaj7/2
+2060704 (m)IVmaj7/b3
+2060705 (m)IVmaj7/3
+2060706 (m)IVmaj7/4
+2060708 (m)IVmaj7/5
+2060710 (m)IVmaj7/6
+2060711 (m)IVmaj7/b7
+2060712 (m)IVmaj7/7
+1110501 (M)#VIm6/1
+1110503 (M)#VIm6/2
+1110504 (M)#VIm6/b3
+1110505 (M)#VIm6/3
+1110506 (M)#VIm6/4
+1110508 (M)#VIm6/5
+1110510 (M)#VIm6/6
+1110511 (M)#VIm6/b7
+1110512 (M)#VIm6/7
+1020401 (M)#Im/1
+1020403 (M)#Im/2
+1020404 (M)#Im/b3
+1020405 (M)#Im/3
+1020406 (M)#Im/4
+1020408 (M)#Im/5
+1020410 (M)#Im/6
+1020411 (M)#Im/b7
+1020412 (M)#Im/7
+2060801 (m)IVm7b5/1
+2060803 (m)IVm7b5/2
+2060804 (m)IVm7b5/b3
+2060805 (m)IVm7b5/3
+2060806 (m)IVm7b5/4
+2060808 (m)IVm7b5/5
+2060810 (m)IVm7b5/6
+2060811 (m)IVm7b5/b7
+2060812 (m)IVm7b5/7
+1110601 (M)#VIm7/1
+1110603 (M)#VIm7/2
+1110604 (M)#VIm7/b3
+1110605 (M)#VIm7/3
+1110606 (M)#VIm7/4
+1110608 (M)#VIm7/5
+1110610 (M)#VIm7/6
+1110611 (M)#VIm7/b7
+1110612 (M)#VIm7/7
+1020501 (M)#Im6/1
+1020503 (M)#Im6/2
+1020504 (M)#Im6/b3
+1020505 (M)#Im6/3
+1020506 (M)#Im6/4
+1020508 (M)#Im6/5
+1020510 (M)#Im6/6
+1020511 (M)#Im6/b7
+1020512 (M)#Im6/7
+2060901 (m)IVdim/1
+2060903 (m)IVdim/2
+2060904 (m)IVdim/b3
+2060905 (m)IVdim/3
+2060906 (m)IVdim/4
+2060908 (m)IVdim/5
+2060910 (m)IVdim/6
+2060911 (m)IVdim/b7
+2060912 (m)IVdim/7
+2110101 (m)VII/1
+2110103 (m)VII/2
+2110104 (m)VII/b3
+2110105 (m)VII/3
+2110106 (m)VII/4
+2110108 (m)VII/5
+2110110 (m)VII/6
+2110111 (m)VII/b7
+2110112 (m)VII/7
+1110701 (M)#VImaj7/1
+1110703 (M)#VImaj7/2
+1110704 (M)#VImaj7/b3
+1110705 (M)#VImaj7/3
+1110706 (M)#VImaj7/4
+1110708 (M)#VImaj7/5
+1110710 (M)#VImaj7/6
+1110711 (M)#VImaj7/b7
+1110712 (M)#VImaj7/7
+1020601 (M)#Im7/1
+1020603 (M)#Im7/2
+1020604 (M)#Im7/b3
+1020605 (M)#Im7/3
+1020606 (M)#Im7/4
+1020608 (M)#Im7/5
+1020610 (M)#Im7/6
+1020611 (M)#Im7/b7
+1020612 (M)#Im7/7
+2061001 (m)IVdim7/1
+2061003 (m)IVdim7/2
+2061004 (m)IVdim7/b3
+2061005 (m)IVdim7/3
+2061006 (m)IVdim7/4
+2061008 (m)IVdim7/5
+2061010 (m)IVdim7/6
+2061011 (m)IVdim7/b7
+2061012 (m)IVdim7/7
+2110201 (m)VII6/1
+2110203 (m)VII6/2
+2110204 (m)VII6/b3
+2110205 (m)VII6/3
+2110206 (m)VII6/4
+2110208 (m)VII6/5
+2110210 (m)VII6/6
+2110211 (m)VII6/b7
+2110212 (m)VII6/7
+2020101 (m)#I/1
+2020103 (m)#I/2
+2020104 (m)#I/b3
+2020105 (m)#I/3
+2020106 (m)#I/4
+2020108 (m)#I/5
+2020110 (m)#I/6
+2020111 (m)#I/b7
+2020112 (m)#I/7
+1110801 (M)#VIm7b5/1
+1110803 (M)#VIm7b5/2
+1110804 (M)#VIm7b5/b3
+1110805 (M)#VIm7b5/3
+1110806 (M)#VIm7b5/4
+1110808 (M)#VIm7b5/5
+1110810 (M)#VIm7b5/6
+1110811 (M)#VIm7b5/b7
+1110812 (M)#VIm7b5/7
+1020701 (M)#Imaj7/1
+1020703 (M)#Imaj7/2
+1020704 (M)#Imaj7/b3
+1020705 (M)#Imaj7/3
+1020706 (M)#Imaj7/4
+1020708 (M)#Imaj7/5
+1020710 (M)#Imaj7/6
+1020711 (M)#Imaj7/b7
+1020712 (M)#Imaj7/7
+2061101 (m)IVaug/1
+2061103 (m)IVaug/2
+2061104 (m)IVaug/b3
+2061105 (m)IVaug/3
+2061106 (m)IVaug/4
+2061108 (m)IVaug/5
+2061110 (m)IVaug/6
+2061111 (m)IVaug/b7
+2061112 (m)IVaug/7
+2110301 (m)VII7/1
+2110303 (m)VII7/2
+2110304 (m)VII7/b3
+2110305 (m)VII7/3
+2110306 (m)VII7/4
+2110308 (m)VII7/5
+2110310 (m)VII7/6
+2110311 (m)VII7/b7
+2110312 (m)VII7/7
+2020201 (m)#I6/1
+2020203 (m)#I6/2
+2020204 (m)#I6/b3
+2020205 (m)#I6/3
+2020206 (m)#I6/4
+2020208 (m)#I6/5
+2020210 (m)#I6/6
+2020211 (m)#I6/b7
+2020212 (m)#I6/7
+1110901 (M)#VIdim/1
+1110903 (M)#VIdim/2
+1110904 (M)#VIdim/b3
+1110905 (M)#VIdim/3
+1110906 (M)#VIdim/4
+1110908 (M)#VIdim/5
+1110910 (M)#VIdim/6
+1110911 (M)#VIdim/b7
+1110912 (M)#VIdim/7
+1020801 (M)#Im7b5/1
+1020803 (M)#Im7b5/2
+1020804 (M)#Im7b5/b3
+1020805 (M)#Im7b5/3
+1020806 (M)#Im7b5/4
+1020808 (M)#Im7b5/5
+1020810 (M)#Im7b5/6
+1020811 (M)#Im7b5/b7
+1020812 (M)#Im7b5/7
+2110401 (m)VIIm/1
+2110403 (m)VIIm/2
+2110404 (m)VIIm/b3
+2110405 (m)VIIm/3
+2110406 (m)VIIm/4
+2110408 (m)VIIm/5
+2110410 (m)VIIm/6
+2110411 (m)VIIm/b7
+2110412 (m)VIIm/7
+2020301 (m)#I7/1
+2020303 (m)#I7/2
+2020304 (m)#I7/b3
+2020305 (m)#I7/3
+2020306 (m)#I7/4
+2020308 (m)#I7/5
+2020310 (m)#I7/6
+2020311 (m)#I7/b7
+2020312 (m)#I7/7
+1111001 (M)#VIdim7/1
+1111003 (M)#VIdim7/2
+1111004 (M)#VIdim7/b3
+1111005 (M)#VIdim7/3
+1111006 (M)#VIdim7/4
+1111008 (M)#VIdim7/5
+1111010 (M)#VIdim7/6
+1111011 (M)#VIdim7/b7
+1111012 (M)#VIdim7/7
+1020901 (M)#Idim/1
+1020903 (M)#Idim/2
+1020904 (M)#Idim/b3
+1020905 (M)#Idim/3
+1020906 (M)#Idim/4
+1020908 (M)#Idim/5
+1020910 (M)#Idim/6
+1020911 (M)#Idim/b7
+1020912 (M)#Idim/7
+1070101 (M)#IV/1
+1070103 (M)#IV/2
+1070104 (M)#IV/b3
+1070105 (M)#IV/3
+1070106 (M)#IV/4
+1070108 (M)#IV/5
+1070110 (M)#IV/6
+1070111 (M)#IV/b7
+1070112 (M)#IV/7
+2110501 (m)VIIm6/1
+2110503 (m)VIIm6/2
+2110504 (m)VIIm6/b3
+2110505 (m)VIIm6/3
+2110506 (m)VIIm6/4
+2110508 (m)VIIm6/5
+2110510 (m)VIIm6/6
+2110511 (m)VIIm6/b7
+2110512 (m)VIIm6/7
+2020401 (m)#Im/1
+2020403 (m)#Im/2
+2020404 (m)#Im/b3
+2020405 (m)#Im/3
+2020406 (m)#Im/4
+2020408 (m)#Im/5
+2020410 (m)#Im/6
+2020411 (m)#Im/b7
+2020412 (m)#Im/7
+1111101 (M)#VIaug/1
+1111103 (M)#VIaug/2
+1111104 (M)#VIaug/b3
+1111105 (M)#VIaug/3
+1111106 (M)#VIaug/4
+1111108 (M)#VIaug/5
+1111110 (M)#VIaug/6
+1111111 (M)#VIaug/b7
+1111112 (M)#VIaug/7
+1021001 (M)#Idim7/1
+1021003 (M)#Idim7/2
+1021004 (M)#Idim7/b3
+1021005 (M)#Idim7/3
+1021006 (M)#Idim7/4
+1021008 (M)#Idim7/5
+1021010 (M)#Idim7/6
+1021011 (M)#Idim7/b7
+1021012 (M)#Idim7/7
+1070201 (M)#IV6/1
+1070203 (M)#IV6/2
+1070204 (M)#IV6/b3
+1070205 (M)#IV6/3
+1070206 (M)#IV6/4
+1070208 (M)#IV6/5
+1070210 (M)#IV6/6
+1070211 (M)#IV6/b7
+1070212 (M)#IV6/7
+2110601 (m)VIIm7/1
+2110603 (m)VIIm7/2
+2110604 (m)VIIm7/b3
+2110605 (m)VIIm7/3
+2110606 (m)VIIm7/4
+2110608 (m)VIIm7/5
+2110610 (m)VIIm7/6
+2110611 (m)VIIm7/b7
+2110612 (m)VIIm7/7
+2020501 (m)#Im6/1
+2020503 (m)#Im6/2
+2020504 (m)#Im6/b3
+2020505 (m)#Im6/3
+2020506 (m)#Im6/4
+2020508 (m)#Im6/5
+2020510 (m)#Im6/6
+2020511 (m)#Im6/b7
+2020512 (m)#Im6/7
+1021101 (M)#Iaug/1
+1021103 (M)#Iaug/2
+1021104 (M)#Iaug/b3
+1021105 (M)#Iaug/3
+1021106 (M)#Iaug/4
+1021108 (M)#Iaug/5
+1021110 (M)#Iaug/6
+1021111 (M)#Iaug/b7
+1021112 (M)#Iaug/7
+1070301 (M)#IV7/1
+1070303 (M)#IV7/2
+1070304 (M)#IV7/b3
+1070305 (M)#IV7/3
+1070306 (M)#IV7/4
+1070308 (M)#IV7/5
+1070310 (M)#IV7/6
+1070311 (M)#IV7/b7
+1070312 (M)#IV7/7
+2110701 (m)VIImaj7/1
+2110703 (m)VIImaj7/2
+2110704 (m)VIImaj7/b3
+2110705 (m)VIImaj7/3
+2110706 (m)VIImaj7/4
+2110708 (m)VIImaj7/5
+2110710 (m)VIImaj7/6
+2110711 (m)VIImaj7/b7
+2110712 (m)VIImaj7/7
+2020601 (m)#Im7/1
+2020603 (m)#Im7/2
+2020604 (m)#Im7/b3
+2020605 (m)#Im7/3
+2020606 (m)#Im7/4
+2020608 (m)#Im7/5
+2020610 (m)#Im7/6
+2020611 (m)#Im7/b7
+2020612 (m)#Im7/7
+1070401 (M)#IVm/1
+1070403 (M)#IVm/2
+1070404 (M)#IVm/b3
+1070405 (M)#IVm/3
+1070406 (M)#IVm/4
+1070408 (M)#IVm/5
+1070410 (M)#IVm/6
+1070411 (M)#IVm/b7
+1070412 (M)#IVm/7
+2110801 (m)VIIm7b5/1
+2110803 (m)VIIm7b5/2
+2110804 (m)VIIm7b5/b3
+2110805 (m)VIIm7b5/3
+2110806 (m)VIIm7b5/4
+2110808 (m)VIIm7b5/5
+2110810 (m)VIIm7b5/6
+2110811 (m)VIIm7b5/b7
+2110812 (m)VIIm7b5/7
+2020701 (m)#Imaj7/1
+2020703 (m)#Imaj7/2
+2020704 (m)#Imaj7/b3
+2020705 (m)#Imaj7/3
+2020706 (m)#Imaj7/4
+2020708 (m)#Imaj7/5
+2020710 (m)#Imaj7/6
+2020711 (m)#Imaj7/b7
+2020712 (m)#Imaj7/7
+1070501 (M)#IVm6/1
+1070503 (M)#IVm6/2
+1070504 (M)#IVm6/b3
+1070505 (M)#IVm6/3
+1070506 (M)#IVm6/4
+1070508 (M)#IVm6/5
+1070510 (M)#IVm6/6
+1070511 (M)#IVm6/b7
+1070512 (M)#IVm6/7
+2110901 (m)VIIdim/1
+2110903 (m)VIIdim/2
+2110904 (m)VIIdim/b3
+2110905 (m)VIIdim/3
+2110906 (m)VIIdim/4
+2110908 (m)VIIdim/5
+2110910 (m)VIIdim/6
+2110911 (m)VIIdim/b7
+2110912 (m)VIIdim/7
+2020801 (m)#Im7b5/1
+2020803 (m)#Im7b5/2
+2020804 (m)#Im7b5/b3
+2020805 (m)#Im7b5/3
+2020806 (m)#Im7b5/4
+2020808 (m)#Im7b5/5
+2020810 (m)#Im7b5/6
+2020811 (m)#Im7b5/b7
+2020812 (m)#Im7b5/7
+1070601 (M)#IVm7/1
+1070603 (M)#IVm7/2
+1070604 (M)#IVm7/b3
+1070605 (M)#IVm7/3
+1070606 (M)#IVm7/4
+1070608 (M)#IVm7/5
+1070610 (M)#IVm7/6
+1070611 (M)#IVm7/b7
+1070612 (M)#IVm7/7
+2111001 (m)VIIdim7/1
+2111003 (m)VIIdim7/2
+2111004 (m)VIIdim7/b3
+2111005 (m)VIIdim7/3
+2111006 (m)VIIdim7/4
+2111008 (m)VIIdim7/5
+2111010 (m)VIIdim7/6
+2111011 (m)VIIdim7/b7
+2111012 (m)VIIdim7/7
+2020901 (m)#Idim/1
+2020903 (m)#Idim/2
+2020904 (m)#Idim/b3
+2020905 (m)#Idim/3
+2020906 (m)#Idim/4
+2020908 (m)#Idim/5
+2020910 (m)#Idim/6
+2020911 (m)#Idim/b7
+2020912 (m)#Idim/7
+2070101 (m)#IV/1
+2070103 (m)#IV/2
+2070104 (m)#IV/b3
+2070105 (m)#IV/3
+2070106 (m)#IV/4
+2070108 (m)#IV/5
+2070110 (m)#IV/6
+2070111 (m)#IV/b7
+2070112 (m)#IV/7
+1070701 (M)#IVmaj7/1
+1070703 (M)#IVmaj7/2
+1070704 (M)#IVmaj7/b3
+1070705 (M)#IVmaj7/3
+1070706 (M)#IVmaj7/4
+1070708 (M)#IVmaj7/5
+1070710 (M)#IVmaj7/6
+1070711 (M)#IVmaj7/b7
+1070712 (M)#IVmaj7/7
+2111101 (m)VIIaug/1
+2111103 (m)VIIaug/2
+2111104 (m)VIIaug/b3
+2111105 (m)VIIaug/3
+2111106 (m)VIIaug/4
+2111108 (m)VIIaug/5
+2111110 (m)VIIaug/6
+2111111 (m)VIIaug/b7
+2111112 (m)VIIaug/7
+2021001 (m)#Idim7/1
+2021003 (m)#Idim7/2
+2021004 (m)#Idim7/b3
+2021005 (m)#Idim7/3
+2021006 (m)#Idim7/4
+2021008 (m)#Idim7/5
+2021010 (m)#Idim7/6
+2021011 (m)#Idim7/b7
+2021012 (m)#Idim7/7
+2070201 (m)#IV6/1
+2070203 (m)#IV6/2
+2070204 (m)#IV6/b3
+2070205 (m)#IV6/3
+2070206 (m)#IV6/4
+2070208 (m)#IV6/5
+2070210 (m)#IV6/6
+2070211 (m)#IV6/b7
+2070212 (m)#IV6/7
+1070801 (M)#IVm7b5/1
+1070803 (M)#IVm7b5/2
+1070804 (M)#IVm7b5/b3
+1070805 (M)#IVm7b5/3
+1070806 (M)#IVm7b5/4
+1070808 (M)#IVm7b5/5
+1070810 (M)#IVm7b5/6
+1070811 (M)#IVm7b5/b7
+1070812 (M)#IVm7b5/7
+2021101 (m)#Iaug/1
+2021103 (m)#Iaug/2
+2021104 (m)#Iaug/b3
+2021105 (m)#Iaug/3
+2021106 (m)#Iaug/4
+2021108 (m)#Iaug/5
+2021110 (m)#Iaug/6
+2021111 (m)#Iaug/b7
+2021112 (m)#Iaug/7
+2070301 (m)#IV7/1
+2070303 (m)#IV7/2
+2070304 (m)#IV7/b3
+2070305 (m)#IV7/3
+2070306 (m)#IV7/4
+2070308 (m)#IV7/5
+2070310 (m)#IV7/6
+2070311 (m)#IV7/b7
+2070312 (m)#IV7/7
+1070901 (M)#IVdim/1
+1070903 (M)#IVdim/2
+1070904 (M)#IVdim/b3
+1070905 (M)#IVdim/3
+1070906 (M)#IVdim/4
+1070908 (M)#IVdim/5
+1070910 (M)#IVdim/6
+1070911 (M)#IVdim/b7
+1070912 (M)#IVdim/7
+1120101 (M)VII/1
+1120103 (M)VII/2
+1120104 (M)VII/b3
+1120105 (M)VII/3
+1120106 (M)VII/4
+1120108 (M)VII/5
+1120110 (M)VII/6
+1120111 (M)VII/b7
+1120112 (M)VII/7
+2070401 (m)#IVm/1
+2070403 (m)#IVm/2
+2070404 (m)#IVm/b3
+2070405 (m)#IVm/3
+2070406 (m)#IVm/4
+2070408 (m)#IVm/5
+2070410 (m)#IVm/6
+2070411 (m)#IVm/b7
+2070412 (m)#IVm/7
+1071001 (M)#IVdim7/1
+1071003 (M)#IVdim7/2
+1071004 (M)#IVdim7/b3
+1071005 (M)#IVdim7/3
+1071006 (M)#IVdim7/4
+1071008 (M)#IVdim7/5
+1071010 (M)#IVdim7/6
+1071011 (M)#IVdim7/b7
+1071012 (M)#IVdim7/7
+1120201 (M)VII6/1
+1120203 (M)VII6/2
+1120204 (M)VII6/b3
+1120205 (M)VII6/3
+1120206 (M)VII6/4
+1120208 (M)VII6/5
+1120210 (M)VII6/6
+1120211 (M)VII6/b7
+1120212 (M)VII6/7
+1030101 (M)II/1
+1030103 (M)II/2
+1030104 (M)II/b3
+1030105 (M)II/3
+1030106 (M)II/4
+1030108 (M)II/5
+1030110 (M)II/6
+1030111 (M)II/b7
+1030112 (M)II/7
+2070501 (m)#IVm6/1
+2070503 (m)#IVm6/2
+2070504 (m)#IVm6/b3
+2070505 (m)#IVm6/3
+2070506 (m)#IVm6/4
+2070508 (m)#IVm6/5
+2070510 (m)#IVm6/6
+2070511 (m)#IVm6/b7
+2070512 (m)#IVm6/7
+1071101 (M)#IVaug/1
+1071103 (M)#IVaug/2
+1071104 (M)#IVaug/b3
+1071105 (M)#IVaug/3
+1071106 (M)#IVaug/4
+1071108 (M)#IVaug/5
+1071110 (M)#IVaug/6
+1071111 (M)#IVaug/b7
+1071112 (M)#IVaug/7
+1120301 (M)VII7/1
+1120303 (M)VII7/2
+1120304 (M)VII7/b3
+1120305 (M)VII7/3
+1120306 (M)VII7/4
+1120308 (M)VII7/5
+1120310 (M)VII7/6
+1120311 (M)VII7/b7
+1120312 (M)VII7/7
+1030201 (M)II6/1
+1030203 (M)II6/2
+1030204 (M)II6/b3
+1030205 (M)II6/3
+1030206 (M)II6/4
+1030208 (M)II6/5
+1030210 (M)II6/6
+1030211 (M)II6/b7
+1030212 (M)II6/7
+2070601 (m)#IVm7/1
+2070603 (m)#IVm7/2
+2070604 (m)#IVm7/b3
+2070605 (m)#IVm7/3
+2070606 (m)#IVm7/4
+2070608 (m)#IVm7/5
+2070610 (m)#IVm7/6
+2070611 (m)#IVm7/b7
+2070612 (m)#IVm7/7
+1120401 (M)VIIm/1
+1120403 (M)VIIm/2
+1120404 (M)VIIm/b3
+1120405 (M)VIIm/3
+1120406 (M)VIIm/4
+1120408 (M)VIIm/5
+1120410 (M)VIIm/6
+1120411 (M)VIIm/b7
+1120412 (M)VIIm/7
+1030301 (M)II7/1
+1030303 (M)II7/2
+1030304 (M)II7/b3
+1030305 (M)II7/3
+1030306 (M)II7/4
+1030308 (M)II7/5
+1030310 (M)II7/6
+1030311 (M)II7/b7
+1030312 (M)II7/7
+2070701 (m)#IVmaj7/1
+2070703 (m)#IVmaj7/2
+2070704 (m)#IVmaj7/b3
+2070705 (m)#IVmaj7/3
+2070706 (m)#IVmaj7/4
+2070708 (m)#IVmaj7/5
+2070710 (m)#IVmaj7/6
+2070711 (m)#IVmaj7/b7
+2070712 (m)#IVmaj7/7
+1120501 (M)VIIm6/1
+1120503 (M)VIIm6/2
+1120504 (M)VIIm6/b3
+1120505 (M)VIIm6/3
+1120506 (M)VIIm6/4
+1120508 (M)VIIm6/5
+1120510 (M)VIIm6/6
+1120511 (M)VIIm6/b7
+1120512 (M)VIIm6/7
+1030401 (M)IIm/1
+1030403 (M)IIm/2
+1030404 (M)IIm/b3
+1030405 (M)IIm/3
+1030406 (M)IIm/4
+1030408 (M)IIm/5
+1030410 (M)IIm/6
+1030411 (M)IIm/b7
+1030412 (M)IIm/7
+2070801 (m)#IVm7b5/1
+2070803 (m)#IVm7b5/2
+2070804 (m)#IVm7b5/b3
+2070805 (m)#IVm7b5/3
+2070806 (m)#IVm7b5/4
+2070808 (m)#IVm7b5/5
+2070810 (m)#IVm7b5/6
+2070811 (m)#IVm7b5/b7
+2070812 (m)#IVm7b5/7
+1120601 (M)VIIm7/1
+1120603 (M)VIIm7/2
+1120604 (M)VIIm7/b3
+1120605 (M)VIIm7/3
+1120606 (M)VIIm7/4
+1120608 (M)VIIm7/5
+1120610 (M)VIIm7/6
+1120611 (M)VIIm7/b7
+1120612 (M)VIIm7/7
+1030501 (M)IIm6/1
+1030503 (M)IIm6/2
+1030504 (M)IIm6/b3
+1030505 (M)IIm6/3
+1030506 (M)IIm6/4
+1030508 (M)IIm6/5
+1030510 (M)IIm6/6
+1030511 (M)IIm6/b7
+1030512 (M)IIm6/7
+2070901 (m)#IVdim/1
+2070903 (m)#IVdim/2
+2070904 (m)#IVdim/b3
+2070905 (m)#IVdim/3
+2070906 (m)#IVdim/4
+2070908 (m)#IVdim/5
+2070910 (m)#IVdim/6
+2070911 (m)#IVdim/b7
+2070912 (m)#IVdim/7
+2120101 (m)#VII/1
+2120103 (m)#VII/2
+2120104 (m)#VII/b3
+2120105 (m)#VII/3
+2120106 (m)#VII/4
+2120108 (m)#VII/5
+2120110 (m)#VII/6
+2120111 (m)#VII/b7
+2120112 (m)#VII/7
+1120701 (M)VIImaj7/1
+1120703 (M)VIImaj7/2
+1120704 (M)VIImaj7/b3
+1120705 (M)VIImaj7/3
+1120706 (M)VIImaj7/4
+1120708 (M)VIImaj7/5
+1120710 (M)VIImaj7/6
+1120711 (M)VIImaj7/b7
+1120712 (M)VIImaj7/7
+1030601 (M)IIm7/1
+1030603 (M)IIm7/2
+1030604 (M)IIm7/b3
+1030605 (M)IIm7/3
+1030606 (M)IIm7/4
+1030608 (M)IIm7/5
+1030610 (M)IIm7/6
+1030611 (M)IIm7/b7
+1030612 (M)IIm7/7
+2071001 (m)#IVdim7/1
+2071003 (m)#IVdim7/2
+2071004 (m)#IVdim7/b3
+2071005 (m)#IVdim7/3
+2071006 (m)#IVdim7/4
+2071008 (m)#IVdim7/5
+2071010 (m)#IVdim7/6
+2071011 (m)#IVdim7/b7
+2071012 (m)#IVdim7/7
+2120201 (m)#VII6/1
+2120203 (m)#VII6/2
+2120204 (m)#VII6/b3
+2120205 (m)#VII6/3
+2120206 (m)#VII6/4
+2120208 (m)#VII6/5
+2120210 (m)#VII6/6
+2120211 (m)#VII6/b7
+2120212 (m)#VII6/7
+2030101 (m)II/1
+2030103 (m)II/2
+2030104 (m)II/b3
+2030105 (m)II/3
+2030106 (m)II/4
+2030108 (m)II/5
+2030110 (m)II/6
+2030111 (m)II/b7
+2030112 (m)II/7
+1120801 (M)VIIm7b5/1
+1120803 (M)VIIm7b5/2
+1120804 (M)VIIm7b5/b3
+1120805 (M)VIIm7b5/3
+1120806 (M)VIIm7b5/4
+1120808 (M)VIIm7b5/5
+1120810 (M)VIIm7b5/6
+1120811 (M)VIIm7b5/b7
+1120812 (M)VIIm7b5/7
+1030701 (M)IImaj7/1
+1030703 (M)IImaj7/2
+1030704 (M)IImaj7/b3
+1030705 (M)IImaj7/3
+1030706 (M)IImaj7/4
+1030708 (M)IImaj7/5
+1030710 (M)IImaj7/6
+1030711 (M)IImaj7/b7
+1030712 (M)IImaj7/7
+2071101 (m)#IVaug/1
+2071103 (m)#IVaug/2
+2071104 (m)#IVaug/b3
+2071105 (m)#IVaug/3
+2071106 (m)#IVaug/4
+2071108 (m)#IVaug/5
+2071110 (m)#IVaug/6
+2071111 (m)#IVaug/b7
+2071112 (m)#IVaug/7
+2120301 (m)#VII7/1
+2120303 (m)#VII7/2
+2120304 (m)#VII7/b3
+2120305 (m)#VII7/3
+2120306 (m)#VII7/4
+2120308 (m)#VII7/5
+2120310 (m)#VII7/6
+2120311 (m)#VII7/b7
+2120312 (m)#VII7/7
+2030201 (m)II6/1
+2030203 (m)II6/2
+2030204 (m)II6/b3
+2030205 (m)II6/3
+2030206 (m)II6/4
+2030208 (m)II6/5
+2030210 (m)II6/6
+2030211 (m)II6/b7
+2030212 (m)II6/7
+1120901 (M)VIIdim/1
+1120903 (M)VIIdim/2
+1120904 (M)VIIdim/b3
+1120905 (M)VIIdim/3
+1120906 (M)VIIdim/4
+1120908 (M)VIIdim/5
+1120910 (M)VIIdim/6
+1120911 (M)VIIdim/b7
+1120912 (M)VIIdim/7
+1030801 (M)IIm7b5/1
+1030803 (M)IIm7b5/2
+1030804 (M)IIm7b5/b3
+1030805 (M)IIm7b5/3
+1030806 (M)IIm7b5/4
+1030808 (M)IIm7b5/5
+1030810 (M)IIm7b5/6
+1030811 (M)IIm7b5/b7
+1030812 (M)IIm7b5/7
+2120401 (m)#VIIm/1
+2120403 (m)#VIIm/2
+2120404 (m)#VIIm/b3
+2120405 (m)#VIIm/3
+2120406 (m)#VIIm/4
+2120408 (m)#VIIm/5
+2120410 (m)#VIIm/6
+2120411 (m)#VIIm/b7
+2120412 (m)#VIIm/7
+2030301 (m)II7/1
+2030303 (m)II7/2
+2030304 (m)II7/b3
+2030305 (m)II7/3
+2030306 (m)II7/4
+2030308 (m)II7/5
+2030310 (m)II7/6
+2030311 (m)II7/b7
+2030312 (m)II7/7
+1121001 (M)VIIdim7/1
+1121003 (M)VIIdim7/2
+1121004 (M)VIIdim7/b3
+1121005 (M)VIIdim7/3
+1121006 (M)VIIdim7/4
+1121008 (M)VIIdim7/5
+1121010 (M)VIIdim7/6
+1121011 (M)VIIdim7/b7
+1121012 (M)VIIdim7/7
+1030901 (M)IIdim/1
+1030903 (M)IIdim/2
+1030904 (M)IIdim/b3
+1030905 (M)IIdim/3
+1030906 (M)IIdim/4
+1030908 (M)IIdim/5
+1030910 (M)IIdim/6
+1030911 (M)IIdim/b7
+1030912 (M)IIdim/7
+1080101 (M)V/1
+1080103 (M)V/2
+1080104 (M)V/b3
+1080105 (M)V/3
+1080106 (M)V/4
+1080108 (M)V/5
+1080110 (M)V/6
+1080111 (M)V/b7
+1080112 (M)V/7
+2120501 (m)#VIIm6/1
+2120503 (m)#VIIm6/2
+2120504 (m)#VIIm6/b3
+2120505 (m)#VIIm6/3
+2120506 (m)#VIIm6/4
+2120508 (m)#VIIm6/5
+2120510 (m)#VIIm6/6
+2120511 (m)#VIIm6/b7
+2120512 (m)#VIIm6/7
+2030401 (m)IIm/1
+2030403 (m)IIm/2
+2030404 (m)IIm/b3
+2030405 (m)IIm/3
+2030406 (m)IIm/4
+2030408 (m)IIm/5
+2030410 (m)IIm/6
+2030411 (m)IIm/b7
+2030412 (m)IIm/7
+1121101 (M)VIIaug/1
+1121103 (M)VIIaug/2
+1121104 (M)VIIaug/b3
+1121105 (M)VIIaug/3
+1121106 (M)VIIaug/4
+1121108 (M)VIIaug/5
+1121110 (M)VIIaug/6
+1121111 (M)VIIaug/b7
+1121112 (M)VIIaug/7
+1031001 (M)IIdim7/1
+1031003 (M)IIdim7/2
+1031004 (M)IIdim7/b3
+1031005 (M)IIdim7/3
+1031006 (M)IIdim7/4
+1031008 (M)IIdim7/5
+1031010 (M)IIdim7/6
+1031011 (M)IIdim7/b7
+1031012 (M)IIdim7/7
+1080201 (M)V6/1
+1080203 (M)V6/2
+1080204 (M)V6/b3
+1080205 (M)V6/3
+1080206 (M)V6/4
+1080208 (M)V6/5
+1080210 (M)V6/6
+1080211 (M)V6/b7
+1080212 (M)V6/7
+2120601 (m)#VIIm7/1
+2120603 (m)#VIIm7/2
+2120604 (m)#VIIm7/b3
+2120605 (m)#VIIm7/3
+2120606 (m)#VIIm7/4
+2120608 (m)#VIIm7/5
+2120610 (m)#VIIm7/6
+2120611 (m)#VIIm7/b7
+2120612 (m)#VIIm7/7
+2030501 (m)IIm6/1
+2030503 (m)IIm6/2
+2030504 (m)IIm6/b3
+2030505 (m)IIm6/3
+2030506 (m)IIm6/4
+2030508 (m)IIm6/5
+2030510 (m)IIm6/6
+2030511 (m)IIm6/b7
+2030512 (m)IIm6/7
+1031101 (M)IIaug/1
+1031103 (M)IIaug/2
+1031104 (M)IIaug/b3
+1031105 (M)IIaug/3
+1031106 (M)IIaug/4
+1031108 (M)IIaug/5
+1031110 (M)IIaug/6
+1031111 (M)IIaug/b7
+1031112 (M)IIaug/7
+1080301 (M)V7/1
+1080303 (M)V7/2
+1080304 (M)V7/b3
+1080305 (M)V7/3
+1080306 (M)V7/4
+1080308 (M)V7/5
+1080310 (M)V7/6
+1080311 (M)V7/b7
+1080312 (M)V7/7
+2120701 (m)#VIImaj7/1
+2120703 (m)#VIImaj7/2
+2120704 (m)#VIImaj7/b3
+2120705 (m)#VIImaj7/3
+2120706 (m)#VIImaj7/4
+2120708 (m)#VIImaj7/5
+2120710 (m)#VIImaj7/6
+2120711 (m)#VIImaj7/b7
+2120712 (m)#VIImaj7/7
+2030601 (m)IIm7/1
+2030603 (m)IIm7/2
+2030604 (m)IIm7/b3
+2030605 (m)IIm7/3
+2030606 (m)IIm7/4
+2030608 (m)IIm7/5
+2030610 (m)IIm7/6
+2030611 (m)IIm7/b7
+2030612 (m)IIm7/7
+1080401 (M)Vm/1
+1080403 (M)Vm/2
+1080404 (M)Vm/b3
+1080405 (M)Vm/3
+1080406 (M)Vm/4
+1080408 (M)Vm/5
+1080410 (M)Vm/6
+1080411 (M)Vm/b7
+1080412 (M)Vm/7
+2120801 (m)#VIIm7b5/1
+2120803 (m)#VIIm7b5/2
+2120804 (m)#VIIm7b5/b3
+2120805 (m)#VIIm7b5/3
+2120806 (m)#VIIm7b5/4
+2120808 (m)#VIIm7b5/5
+2120810 (m)#VIIm7b5/6
+2120811 (m)#VIIm7b5/b7
+2120812 (m)#VIIm7b5/7
+2030701 (m)IImaj7/1
+2030703 (m)IImaj7/2
+2030704 (m)IImaj7/b3
+2030705 (m)IImaj7/3
+2030706 (m)IImaj7/4
+2030708 (m)IImaj7/5
+2030710 (m)IImaj7/6
+2030711 (m)IImaj7/b7
+2030712 (m)IImaj7/7
+1080501 (M)Vm6/1
+1080503 (M)Vm6/2
+1080504 (M)Vm6/b3
+1080505 (M)Vm6/3
+1080506 (M)Vm6/4
+1080508 (M)Vm6/5
+1080510 (M)Vm6/6
+1080511 (M)Vm6/b7
+1080512 (M)Vm6/7
+2120901 (m)#VIIdim/1
+2120903 (m)#VIIdim/2
+2120904 (m)#VIIdim/b3
+2120905 (m)#VIIdim/3
+2120906 (m)#VIIdim/4
+2120908 (m)#VIIdim/5
+2120910 (m)#VIIdim/6
+2120911 (m)#VIIdim/b7
+2120912 (m)#VIIdim/7
+2030801 (m)IIm7b5/1
+2030803 (m)IIm7b5/2
+2030804 (m)IIm7b5/b3
+2030805 (m)IIm7b5/3
+2030806 (m)IIm7b5/4
+2030808 (m)IIm7b5/5
+2030810 (m)IIm7b5/6
+2030811 (m)IIm7b5/b7
+2030812 (m)IIm7b5/7
+1080601 (M)Vm7/1
+1080603 (M)Vm7/2
+1080604 (M)Vm7/b3
+1080605 (M)Vm7/3
+1080606 (M)Vm7/4
+1080608 (M)Vm7/5
+1080610 (M)Vm7/6
+1080611 (M)Vm7/b7
+1080612 (M)Vm7/7
+2121001 (m)#VIIdim7/1
+2121003 (m)#VIIdim7/2
+2121004 (m)#VIIdim7/b3
+2121005 (m)#VIIdim7/3
+2121006 (m)#VIIdim7/4
+2121008 (m)#VIIdim7/5
+2121010 (m)#VIIdim7/6
+2121011 (m)#VIIdim7/b7
+2121012 (m)#VIIdim7/7
+2030901 (m)IIdim/1
+2030903 (m)IIdim/2
+2030904 (m)IIdim/b3
+2030905 (m)IIdim/3
+2030906 (m)IIdim/4
+2030908 (m)IIdim/5
+2030910 (m)IIdim/6
+2030911 (m)IIdim/b7
+2030912 (m)IIdim/7
+2080101 (m)V/1
+2080103 (m)V/2
+2080104 (m)V/b3
+2080105 (m)V/3
+2080106 (m)V/4
+2080108 (m)V/5
+2080110 (m)V/6
+2080111 (m)V/b7
+2080112 (m)V/7
+1080701 (M)Vmaj7/1
+1080703 (M)Vmaj7/2
+1080704 (M)Vmaj7/b3
+1080705 (M)Vmaj7/3
+1080706 (M)Vmaj7/4
+1080708 (M)Vmaj7/5
+1080710 (M)Vmaj7/6
+1080711 (M)Vmaj7/b7
+1080712 (M)Vmaj7/7
+2121101 (m)#VIIaug/1
+2121103 (m)#VIIaug/2
+2121104 (m)#VIIaug/b3
+2121105 (m)#VIIaug/3
+2121106 (m)#VIIaug/4
+2121108 (m)#VIIaug/5
+2121110 (m)#VIIaug/6
+2121111 (m)#VIIaug/b7
+2121112 (m)#VIIaug/7
+2031001 (m)IIdim7/1
+2031003 (m)IIdim7/2
+2031004 (m)IIdim7/b3
+2031005 (m)IIdim7/3
+2031006 (m)IIdim7/4
+2031008 (m)IIdim7/5
+2031010 (m)IIdim7/6
+2031011 (m)IIdim7/b7
+2031012 (m)IIdim7/7
+2080201 (m)V6/1
+2080203 (m)V6/2
+2080204 (m)V6/b3
+2080205 (m)V6/3
+2080206 (m)V6/4
+2080208 (m)V6/5
+2080210 (m)V6/6
+2080211 (m)V6/b7
+2080212 (m)V6/7
+1080801 (M)Vm7b5/1
+1080803 (M)Vm7b5/2
+1080804 (M)Vm7b5/b3
+1080805 (M)Vm7b5/3
+1080806 (M)Vm7b5/4
+1080808 (M)Vm7b5/5
+1080810 (M)Vm7b5/6
+1080811 (M)Vm7b5/b7
+1080812 (M)Vm7b5/7
+2031101 (m)IIaug/1
+2031103 (m)IIaug/2
+2031104 (m)IIaug/b3
+2031105 (m)IIaug/3
+2031106 (m)IIaug/4
+2031108 (m)IIaug/5
+2031110 (m)IIaug/6
+2031111 (m)IIaug/b7
+2031112 (m)IIaug/7
+2080301 (m)V7/1
+2080303 (m)V7/2
+2080304 (m)V7/b3
+2080305 (m)V7/3
+2080306 (m)V7/4
+2080308 (m)V7/5
+2080310 (m)V7/6
+2080311 (m)V7/b7
+2080312 (m)V7/7
+1080901 (M)Vdim/1
+1080903 (M)Vdim/2
+1080904 (M)Vdim/b3
+1080905 (M)Vdim/3
+1080906 (M)Vdim/4
+1080908 (M)Vdim/5
+1080910 (M)Vdim/6
+1080911 (M)Vdim/b7
+1080912 (M)Vdim/7
+2080401 (m)Vm/1
+2080403 (m)Vm/2
+2080404 (m)Vm/b3
+2080405 (m)Vm/3
+2080406 (m)Vm/4
+2080408 (m)Vm/5
+2080410 (m)Vm/6
+2080411 (m)Vm/b7
+2080412 (m)Vm/7
+1081001 (M)Vdim7/1
+1081003 (M)Vdim7/2
+1081004 (M)Vdim7/b3
+1081005 (M)Vdim7/3
+1081006 (M)Vdim7/4
+1081008 (M)Vdim7/5
+1081010 (M)Vdim7/6
+1081011 (M)Vdim7/b7
+1081012 (M)Vdim7/7
+1040101 (M)#II/1
+1040103 (M)#II/2
+1040104 (M)#II/b3
+1040105 (M)#II/3
+1040106 (M)#II/4
+1040108 (M)#II/5
+1040110 (M)#II/6
+1040111 (M)#II/b7
+1040112 (M)#II/7
+2080501 (m)Vm6/1
+2080503 (m)Vm6/2
+2080504 (m)Vm6/b3
+2080505 (m)Vm6/3
+2080506 (m)Vm6/4
+2080508 (m)Vm6/5
+2080510 (m)Vm6/6
+2080511 (m)Vm6/b7
+2080512 (m)Vm6/7
+1081101 (M)Vaug/1
+1081103 (M)Vaug/2
+1081104 (M)Vaug/b3
+1081105 (M)Vaug/3
+1081106 (M)Vaug/4
+1081108 (M)Vaug/5
+1081110 (M)Vaug/6
+1081111 (M)Vaug/b7
+1081112 (M)Vaug/7
+1040201 (M)#II6/1
+1040203 (M)#II6/2
+1040204 (M)#II6/b3
+1040205 (M)#II6/3
+1040206 (M)#II6/4
+1040208 (M)#II6/5
+1040210 (M)#II6/6
+1040211 (M)#II6/b7
+1040212 (M)#II6/7
+2080601 (m)Vm7/1
+2080603 (m)Vm7/2
+2080604 (m)Vm7/b3
+2080605 (m)Vm7/3
+2080606 (m)Vm7/4
+2080608 (m)Vm7/5
+2080610 (m)Vm7/6
+2080611 (m)Vm7/b7
+2080612 (m)Vm7/7
+1040301 (M)#II7/1
+1040303 (M)#II7/2
+1040304 (M)#II7/b3
+1040305 (M)#II7/3
+1040306 (M)#II7/4
+1040308 (M)#II7/5
+1040310 (M)#II7/6
+1040311 (M)#II7/b7
+1040312 (M)#II7/7
+2080701 (m)Vmaj7/1
+2080703 (m)Vmaj7/2
+2080704 (m)Vmaj7/b3
+2080705 (m)Vmaj7/3
+2080706 (m)Vmaj7/4
+2080708 (m)Vmaj7/5
+2080710 (m)Vmaj7/6
+2080711 (m)Vmaj7/b7
+2080712 (m)Vmaj7/7
\ No newline at end of file
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/chord_sequence_mining/spmf.jar
Binary file collection_analysis/chord_sequence_mining/spmf.jar has changed
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/chord_sequence_mining/spmf.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/chord_sequence_mining/spmf.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,220 @@
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+#
+# This is a data conversion wrapper for the spmf toolkit.
+# The toolkit has been released under GPL3 at www.philippe-fournier-viger.com/spmf
+
+__author__="Daniel Wolff"
+
+import chord2function as c2f
+import csv
+import re
+
+# takes a dictionary of chords for one or multiple files
+# in the form of dict[clipid] = [ (time,key,mode,fun,typ,bfun) ]
+# and converts it into spmf
+def folder2spmf(folderin = 'D:/mirg/Chord_Analysis20141216/', fileout = 'D:/mirg/Chord_Analysis20141216/Beethoven.spmf'):
+
+ # get chords for all files
+ output = c2f.folder2functions(folderin)
+
+ # open log
+ logfile = fileout + '.dic'
+ csvfile = open(logfile, "w+b") #opens the file for updating
+ w = csv.writer(csvfile)
+ w.writerow(["track","key","mode","sequence length"])
+
+ # open spmf file
+ fspmf = open(fileout,'w')
+ # ---
+ # this is writing the spmf format
+ for track,trackdata in output.iteritems():
+ # write chord sequence as one line in spmf file
+ for (time,key,mode,fun,typ,bfun) in trackdata:
+ chord = c2f.fun2num(fun,typ,bfun,mode)
+
+        # -1 is the separator of items or itemsets
+ fspmf.write(str(chord) + ' -1 ')
+
+ # the sequence is closed with -2
+ fspmf.write('-2\n')
+ w.writerow([track, str(key), str(mode),str(len(trackdata))])
+
+ fspmf.close()
+ csvfile.close()
+
+# read an spmf file
+# def parsespmf(filein = 'D:/mirg/Chord_Analysis20141216/Beethoven.txt'):
+
+# string sourcefile path to the source spmf file with chords from records
+# string patternfile path to the pattern spmf file
+# matches each of the patterns in patternfile
+# to the chord sequences in sourcefile
+def match(sourcefile = 'D:/mirg/Chord_Analysis20141216/Beethoven.spmf',sourcedict = 'D:/mirg/Chord_Analysis20141216/Beethoven.spmf.dic', patternfile = 'D:/mirg/Chord_Analysis20141216/Beethoven_70.txt'):
+
+ # define regular expressions for matching
+ # closed sequence
+
+ # ---
+    # we assume here that there are more files than patterns,
+    # as the display of patterns is somewhat limited;
+    # parallelisation is therefore 1 pattern / multiple files
+    # per instance
+ # ---
+
+ patterns = []
+ patterns_raw = []
+ # read all patterns
+ f = open(patternfile, 'r')
+ for line in f:
+ # a line looks like this:
+ # 1120401 -1 1120101 -1 #SUP: 916
+
+
+ # save pattern
+ #patterns.append(pattern)
+ #numeric? or just regex?
+ # we'll use string, so any representation works
+
+ pattern,support = readPattern(line)
+ patterns.append(pattern)
+
+ # here's the regex
+ # first the spacer
+ #spacer = '((\s-1\s)|((\s-1\s)*[0-9]+\s-1\s)+)'
+ #repattern = r'(' + spacer + '*' + spacer.join(pattern) + spacer + '*' + '.*)'
+ #print repattern
+ #patterns.append(re.compile(repattern))
+
+ # ---
+ # now for the input sequences
+ # ---
+ # first: read track dictionary and get the input sequence names
+ tracks = getClipDict(sourcedict)
+
+ # read the input sequences
+ source = open(sourcefile, 'r')
+ patterns_tracks = dict()
+ tracks_patterns = dict()
+
+ # iterate over all tracks - to be parallelised
+ for track,count in tracks.iteritems():
+ sequence = readSequence(next(source))
+ print track
+ for p in range(0,len(patterns)):
+ # match open or closed pattern
+ if openPatternInSequence(sequence,patterns[p]):
+ if patterns_tracks.has_key(p):
+ patterns_tracks[p].append(track)
+ else:
+ patterns_tracks[p] = [track]
+
+ if tracks_patterns.has_key(track):
+ tracks_patterns[track].append(p)
+ else:
+ tracks_patterns[track] = [p]
+
+ # write clip index to files
+ writeAllPatternsForClips('D:/mirg/Chord_Analysis20141216/',tracks_patterns)
+ #print patterns_tracks[p]
+
+# writes results to disk per key
+def writeAllPatternsForClips(path = 'D:/mirg/Chord_Analysis20141216/',tracks_patterns = dict()):
+
+ for name, contents in tracks_patterns.iteritems():
+ # create new file
+ csvfile = open(path + '/' + name + '_patterns.csv', "w+b") #opens the file for updating
+ w = csv.writer(csvfile)
+
+ # compress pattern data ?
+ # e.g. 2 columns from-to for the long series of atomic increments
+
+ w.writerow(contents)
+ csvfile.close()
+
+
+# @param line: reads a line in the spmf output file with frequent patterns
+# returns list of strings "pattern" and int "support"
+def readPattern(line):
+ # locate support
+ suploc = line.find('#SUP:')
+    support = int(line[suploc+5:].strip())
+
+ # extract pattern
+ pattern = line[:suploc].split(' -1 ')[:-1]
+ return (pattern,support)
+
+# @param line: reads a line in the spmf input file with a chord sequence
+# returns list of strings "sequence"
+def readSequence(line):
+ # locate support
+ suploc = line.find('-2')
+
+ # extract pattern
+ sequence = line[:suploc].split(' -1 ')[:-1]
+ return sequence
+
+# finds open pattern in sequences
+# @param [string] sequence input sequence
+# @param [string] pattern pattern to be found
+def openPatternInSequence(sequence,pattern):
+ patidx = 0
+ for item in sequence:
+ if item == pattern[patidx]:
+            patidx += 1
+
+            # did we complete the pattern?
+            if patidx >= len(pattern):
+ # could also return the start index
+ return 1
+ # finished the sequence before finishing pattern
+ return 0
+
+# finds closed pattern in sequences
+# @param [string] sequence input sequence
+# @param [string] pattern pattern to be found
+def closedPatternInSequence(sequence,pattern):
+ # alternatively use KnuthMorrisPratt with unsplit string
+ return ''.join(map(str, pattern)) in ''.join(map(str, sequence))
+
+# reads all track names from the dictionary created by folder2spmf
+# @param sourcedict path to dictionary
+def getClipDict(sourcedict):
+
+ f = open(sourcedict, 'rt')
+ reader = csv.reader(f)
+
+    # skip the first row, which contains the legend
+ next(reader)
+
+ # get following rows
+ tracks = dict()
+ for (track,key,mode,seqlen) in reader:
+ tracks[track]= (key,mode,seqlen)
+ #tracks.append((track,count))
+
+ f.close()
+ return tracks
+
+
+# run spmf afterwards with java -jar spmf.jar run CM-SPADE Beethoven.spmf output.txt 50% 3
+if __name__ == "__main__":
+ #folder2spmf()
+ match()
\ No newline at end of file
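The SPMF wrapper above relies on a simple line format: items are separated by ` -1 `, each input sequence line ends with the terminator `-2`, and each mined pattern line carries a trailing `#SUP: n` support count. A minimal Python 3 sketch of the same parsing and open (subsequence) matching, written as a standalone illustration rather than a drop-in replacement:

```python
def read_pattern(line):
    # split off the "#SUP: n" support suffix of a mined pattern line
    suploc = line.find('#SUP:')
    support = int(line[suploc + 5:].strip())
    pattern = line[:suploc].split(' -1 ')[:-1]
    return pattern, support

def read_sequence(line):
    # an input sequence line ends with the terminator -2
    return line[:line.find('-2')].split(' -1 ')[:-1]

def open_pattern_in_sequence(sequence, pattern):
    # greedy subsequence match: pattern items may be
    # interleaved with other items in the sequence
    patidx = 0
    for item in sequence:
        if patidx < len(pattern) and item == pattern[patidx]:
            patidx += 1
    return patidx >= len(pattern)
```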
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/tools/csv2json.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/tools/csv2json.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,101 @@
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# this file converts data (csv) to the final web api json format.
+
+__author__="Daniel Wolff"
+__date__ ="$11-Feb-2015 16:32:09$"
+
+import sys
+import csv
+import json
+
+
+# global data format version
+dv=str(1)
+
+
+def getIdentifiers():
+ # collect collection and perspective id
+ cid = raw_input("Please Input Collection Identifier:")
+    pid = raw_input("Please Input Perspective / Analysis Identifier:")
+
+ # additional parameters
+ params = raw_input("Input further parameters as string 'parameter1=value1_parameter2=value2...'")
+ return (cid,pid,params)
+
+def makeUrl(cid,pid,params):
+ #"getCollectionPerspective_pid=chordSequences_cid=jazz_limit=100.json"
+ s = "_"
+ if not params == '':
+ url = s.join(['getCollectionPerspective','pid='+ pid,'cid=' + cid,params,"dv="+dv])
+ else:
+ url = s.join(['getCollectionPerspective','pid='+ pid,'cid=' + cid,'dv='+dv])
+
+ return url
+
+
+# reads in any csv and returns a list of structure
+# time(float), data1, data2, ..., dataN
+def read_csv(filein = ''):
+ output = []
+ with open(filein, 'rb') as csvfile:
+ contents = csv.reader(csvfile, delimiter=',', quotechar='"')
+ for row in contents:
+ output.append(row)
+ return output
+
+# write data into object formatted and create json output
+def data2apijson(data,cid,pid,params):
+ url = makeUrl(cid,pid,params)
+    if not isinstance(data, dict):
+ obj = {"query":url, "result": {pid:data}}
+ else:
+ obj = {"query":url, "result": data}
+ return (json.dumps(obj), url)
+
+# writes json to file
+def writejsonfile(jstring,url):
+    f = open(url + ".json", "w")
+    f.write(jstring)
+    f.close()
+
+def data2json(data):
+ # get further needed data such as url
+ (cid,pid,params) = getIdentifiers()
+ (jsondat,url) = data2apijson(data,cid,pid,params)
+
+ # write to file
+ writejsonfile(jsondat,url)
+
+#call this with csv input
+if __name__ == "__main__":
+ # load data in file
+ try:
+ if len(sys.argv) < 2:
+ csvfile = 'test.csv'
+ else:
+ csvfile = sys.argv[1]
+ data = read_csv(csvfile)
+    except Exception:
+ print "Please provide input csv file as command-line argument"
+ quit()
+
+ data2json(data)
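csv2json.py wraps analysis results in a small envelope: an underscore-joined query string (collection id, perspective id, optional parameters, data version) plus a `result` object keyed by the perspective id when the payload is not already a dict. A Python 3 sketch of that envelope (function names here are illustrative, not part of the script):

```python
import json

DV = "1"  # data format version, mirroring the script's global

def make_url(cid, pid, params=""):
    # underscore-joined query name, e.g.
    # getCollectionPerspective_pid=chordSequences_cid=jazz_dv=1
    parts = ["getCollectionPerspective", "pid=" + pid, "cid=" + cid]
    if params:
        parts.append(params)
    parts.append("dv=" + DV)
    return "_".join(parts)

def data_to_api_json(data, cid, pid, params=""):
    url = make_url(cid, pid, params)
    # non-dict payloads are keyed under the perspective id
    result = data if isinstance(data, dict) else {pid: data}
    return json.dumps({"query": url, "result": result}), url
```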
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/tools/test.csv
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/tools/test.csv Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,3 @@
+1,2,3
+2,A#,5
+4,A,4
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/tools/vampstats.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/tools/vampstats.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,166 @@
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# creates a histogram from given input files or folder
+
+__author__="Daniel Wolff"
+__date__ ="$11-Feb-2015 18:18:47$"
+
+import sys
+import os
+import csv
+import numpy
+import csv2json as c2j
+import re
+
+
+# global feature extensions
+#ext = tuple([".n3",".csv",".mid"])
+ext = tuple([".csv"])
+
+floater = re.compile("((\d+)(\.\d+)*)")
+# reads in any csv and returns a list of structure
+# time(float), data1, data2, ..., dataN
+def read_vamp_csv(filein = '', datapos = 0):
+ output = []
+ badcount = 0
+ with open(filein, 'rb') as csvfile:
+ contents = csv.reader(csvfile, delimiter=',', quotechar='"')
+ for row in contents:
+ if len(row) >= datapos + 2:
+ output.append([float(row[0])] + row[1:])
+ else:
+ badcount += 1
+ print "Ignored " + str(badcount) + " short rows"
+ return output
+
+#calculates the histogram
+def histogram(data, datapos = 1, nbins = -1):
+
+ # symbols or numerical input?
+ if not nbins == -1:
+
+ #convert to numpy data
+ ddata = string2numpy(data,datapos)
+
+ count,index = numpy.histogram(ddata,nbins-1)
+ count = count.tolist()
+ index = index.tolist()
+
+ # here for strings
+ else:
+ # build histogram on strings
+ histo = dict()
+ for row in data:
+ histo[row[datapos+1]] = histo.get(row[datapos+1], 0) + 1
+ index = histo.keys()
+ count = histo.values()
+
+ # return histogram
+ return {"count":count, "index":index}
+
+#calculates statistics for numerical input
+def numstats(data,datapos):
+
+ #convert to numpy data
+ ddata = string2numpy(data,datapos)
+
+ avg = numpy.average(ddata).tolist()
+ med = numpy.median(ddata).tolist()
+ std = numpy.std(ddata).tolist()
+
+ # return data
+ return {"average": avg, "median": med, "std": std}
+
+def featurefilesinpath(path):
+ # ---
+ # we traverse the file structure
+ # and list files to copy
+ # ---
+ files = []
+ for (dirpath, dirnames, filenames) in os.walk(path):
+ for file in filenames:
+ # we copy all requested files and the transform files as well!
+ if (file.endswith(ext)):
+ source = os.path.join(dirpath, file).replace('\\','/')
+ files.append(source)
+ return files
+
+# convert to numpy
+def string2numpy(data,datapos):
+ try:
+ ddata = numpy.array(data, dtype=float)[:, datapos+1]
+    except ValueError:
+ edata = []
+ for row in data:
+ # account for verbatim units
+ m = re.search("[a-zA-Z]", row[datapos+1])
+ if m is not None:
+ # take only the specified column datapos+1
+ edata.append(row[datapos+1][:(m.start()-1)])
+ else:
+ # take only the specified column datapos+1
+ edata.append(row[datapos+1])
+ ddata = numpy.array(edata,dtype=float)
+ return ddata
+
+# main entry point
+if __name__ == "__main__":
+ print "Usage: vampstats datapos nbins file1/dir1 file2/dir2 ...."
+ print "datapos: column of data after timecode to process"
+ print "nbins: -1 for categorical data, otherwise number of bins for histogram"
+
+ datapos = int(sys.argv[1])
+ nbins = int(sys.argv[2])
+
+ # check and collate files
+ files = []
+ for path in sys.argv[3:]:
+ if os.path.isdir(path):
+ files.extend(featurefilesinpath(path))
+ else:
+ if os.path.isfile(path):
+            files.append(path)
+ print "Number of files now loading: " + str(len(files))
+
+ # we collate all data first and then count.
+ # @todo: read all files and create dictionary first for large tasks
+ data = []
+ for file in files:
+ print file
+ data.extend(read_vamp_csv(file, datapos))
+
+ print "Total data size in memory: " + str(sys.getsizeof(data))
+
+ # now get the histogram for all data
+ histo = histogram(data,datapos,nbins)
+ print histo
+ print "Please input a description for the histogram analysis features"
+ c2j.data2json(histo)
+
+ # further numerical analysis if this is not categorical data
+ if not nbins == -1:
+ ns = numstats(data,datapos)
+ print ns
+ print "Please input a description for the general statistics features"
+ c2j.data2json(ns)
+
+
+
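For categorical data (`nbins == -1`), `histogram()` above counts string values in a dict rather than calling `numpy.histogram`. The same idea in a self-contained Python 3 sketch (column 0 is the timecode, so the value sits at `datapos + 1`):

```python
def categorical_histogram(rows, datapos=1):
    # count occurrences of the value at column datapos+1
    histo = {}
    for row in rows:
        key = row[datapos + 1]
        histo[key] = histo.get(key, 0) + 1
    # mirror the script's {"count": ..., "index": ...} shape
    return {"count": list(histo.values()), "index": list(histo.keys())}
```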
diff -r 000000000000 -r e34cf1b6fe09 collection_analysis/tools/vampstats_pitch_weighted.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/collection_analysis/tools/vampstats_pitch_weighted.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,172 @@
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+# creates a histogram from given input files or folder
+
+__author__="Daniel Wolff, Dan"
+__date__ ="$11-Feb-2015 18:18:47$"
+
+import sys
+import os
+import csv
+import numpy
+import csv2json as c2j
+import re
+
+
+# global feature extensions
+#ext = tuple([".n3",".csv",".mid"])
+ext = tuple([".csv"])
+
+floater = re.compile("((\d+)(\.\d+)*)")
+# reads in any csv and returns a list of structure
+# time(float), data1, data2, ..., dataN
+def read_vamp_csv(filein = '', datapos = 0):
+ output = []
+ badcount = 0
+ with open(filein, 'rb') as csvfile:
+ contents = csv.reader(csvfile, delimiter=',', quotechar='"')
+ for row in contents:
+ if len(row) >= datapos + 2:
+ output.append([float(row[0])] + row[1:])
+ else:
+ badcount += 1
+ print "Ignored " + str(badcount) + " short rows"
+ return output
+
+#calculates the histogram
+def histogram(data, datapos = 1, nbins = -1):
+
+ # symbols or numerical input?
+ if not nbins == -1:
+
+        #convert to numpy data
+ ddata = string2numpy(data,datapos)
+
+ # get time weights
+ tw_data = string2numpy(data,2)
+
+ # get loudness weights
+ lw_data = string2numpy(data,3)
+
+ count,index = numpy.histogram(ddata,nbins-1, weights=numpy.multiply(tw_data,lw_data))
+ count = count.tolist()
+ index = index.tolist()
+
+ # here for strings
+ else:
+ # build histogram on strings
+ histo = dict()
+ for row in data:
+ histo[row[datapos+1]] = histo.get(row[datapos+1], 0) + 1
+ index = histo.keys()
+ count = histo.values()
+
+ # return histogram
+ return {"count":count, "index":index}
+
+#calculates statistics for numerical input
+def numstats(data,datapos):
+
+ #convert to numpy data
+ ddata = string2numpy(data,datapos)
+
+ avg = numpy.average(ddata).tolist()
+ med = numpy.median(ddata).tolist()
+ std = numpy.std(ddata).tolist()
+
+ # return data
+ return {"average": avg, "median": med, "std": std}
+
+def featurefilesinpath(path):
+ # ---
+ # we traverse the file structure
+ # and list files to copy
+ # ---
+ files = []
+ for (dirpath, dirnames, filenames) in os.walk(path):
+ for file in filenames:
+ # we copy all requested files and the transform files as well!
+ if (file.endswith(ext)):
+ source = os.path.join(dirpath, file).replace('\\','/')
+ files.append(source)
+ return files
+
+# convert to numpy
+def string2numpy(data,datapos):
+ try:
+ ddata = numpy.array(data, dtype=float)[:, datapos+1]
+    except ValueError:
+ edata = []
+ for row in data:
+ #edata.append(float(floater.match(row[datapos+1]).group(1)))
+ m = re.search("[a-zA-Z]", row[datapos+1])
+ if m is not None:
+                # take only the specified column datapos+1
+ edata.append(row[datapos+1][:(m.start()-1)])
+ else:
+                # take only the specified column datapos+1
+ edata.append(row[datapos+1])
+ ddata = numpy.array(edata,dtype=float)
+ return ddata
+
+# main entry point
+if __name__ == "__main__":
+ print "Usage: vampstats datapos nbins file1/dir1 file2/dir2 ...."
+ print "datapos: column of data after timecode to process"
+ print "nbins: -1 for categorical data, otherwise number of bins for histogram"
+
+ datapos = int(sys.argv[1])
+ nbins = int(sys.argv[2])
+
+ # check and collate files
+ files = []
+ for path in sys.argv[3:]:
+ if os.path.isdir(path):
+ files.extend(featurefilesinpath(path))
+ else:
+ if os.path.isfile(path):
+            files.append(path)
+ print "Number of files now loading: " + str(len(files))
+
+ # we collate all data first and then count.
+ # @todo: read all files and create dictionary first for large tasks
+ data = []
+ for file in files:
+ print file
+ data.extend(read_vamp_csv(file, datapos))
+
+ print "Total data size in memory: " + str(sys.getsizeof(data))
+
+ # now get the histogram for all data
+ histo = histogram(data,datapos,nbins)
+ print histo
+ print "Please input a description for the histogram analysis features"
+ c2j.data2json(histo)
+
+ # further numerical analysis if this is not categorical data
+ if not nbins == -1:
+ ns = numstats(data,datapos)
+ print ns
+ print "Please input a description for the general statistics features"
+ c2j.data2json(ns)
+
+
+
diff -r 000000000000 -r e34cf1b6fe09 ipcluster/benchmark/start_benchmark.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ipcluster/benchmark/start_benchmark.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,90 @@
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-#
+
+#
+# This is a benchmark that runs calculations
+# on distributed nodes and reports the
+# average time the calculation needs.
+# Used to determine the efficiency of virtualisation
+#
+
+# parameters
+import sys
+import time
+import numpy
+from IPython.parallel import Client
+#import numpy as np
+
+def approxPi(error):
+ def nth_term(n):
+ return 4 / (2.0 * n + 1) * (-1) ** n
+ prev = nth_term(0) # First term
+ current = nth_term(0) + nth_term(1) # First + second terms
+ n = 2 # Starts at third term
+
+ while abs(prev - current) > error:
+ prev = current
+ current += nth_term(n)
+ n += 1
+
+ return current
+
+
+def main(niters = 1,hardness = 1e-7):
+ # connect to client
+ i = 0
+ while i< 5 :
+ try :
+ i = i+1
+ rc = Client()
+ nb_core = numpy.size(rc.ids)
+ lview = rc.load_balanced_view()
+ lview.block = True
+ dview = rc[:]
+ dview.block = True
+ break
+ except Exception :
+ print 'Client not started yet (Waiting for 5 sec...)'
+ time.sleep(5)
+ if i == 5 :
+ return 'Cannot connect to cluster'
+ time.sleep(2)
+
+
+ #with dview.sync_imports():
+ # import start_benchmark
+
+ print 'Benchmarking pi approximation on ' + str(nb_core) + ' engines.'
+ ticc = time.clock()
+ tic = time.time()
+ result = numpy.mean(lview.map(approxPi,numpy.ones(nb_core*niters) * hardness))
+ toc = time.time()-tic
+ tocc = time.clock()-ticc
+ print 'Result: ' + str(result)
+ print 'Time used: ' + str(toc)
+ print 'CPU Time passed: ' + str(nb_core) + "*" + str(toc)
+
+if __name__ == "__main__":
+ if len(sys.argv) >= 3:
+        main(int(sys.argv[1]), float(sys.argv[2]))
+ else:
+ main()
+
+
diff -r 000000000000 -r e34cf1b6fe09 ipcluster/sonic_annotator_vamp.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ipcluster/sonic_annotator_vamp.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,149 @@
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+from __future__ import division
+
+import matplotlib.pyplot as plt
+import numpy as np
+import sys
+import shutil
+import os
+import errno
+import subprocess
+import time
+import random
+import hashlib
+
+
+# uses a separate console process to achieve the file conversion
+def vamp_host_process(argslist):
+ #"""Call sonic annotator"""
+
+ vamp_host = 'sonic-annotator'
+ command = [vamp_host]
+ command.extend(argslist)
+
+ # which sa version?
+ #p = subprocess.Popen([vamp_host, '-v'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ #print(p.stdout.read())
+
+
+ #stdout = subprocess.check_output(command, stderr=subprocess.STDOUT)
+ p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ text = p.stdout.read()
+ retcode = p.wait()
+ if (retcode==0):
+ print "Finished "
+ return 1
+ else:
+ print "Error " + text
+ return text
+
+ # time.sleep(random.random()*1.0)
+ #res = subprocess.call(command)
+
+
+# processes the given file using a vamp plugin and sonic annotator
+# @param string wav_file file to be processed
+def transform((wav_file, transform_file, hash, out_path)):
+
+ # get filename of transform
+ tpath = os.path.split(transform_file)
+
+ # prepare output directory:
+ # get audio file subpath
+    # create directory _Features for output if it doesn't exist
+ spath = os.path.split(wav_file)
+
+ if out_path == '':
+ featpath = spath[0] + '/_Analysis/' + tpath[1] + "_" + hash[:5]
+ else:
+ folders = spath[0].split('/')
+ featpath = out_path + folders[-1] + '/' + tpath[1] + "_" + hash[:5]
+
+ #if not os.path.exists(featpath):
+ # os.makedirs(featpath)
+ print 'Creating directory' + featpath
+ try:
+ os.makedirs(featpath)
+ except OSError as exception:
+        if exception.errno != errno.EEXIST:
+ raise
+
+ # copy transform file into directory
+ try:
+ shutil.copy(transform_file, featpath + '/' + tpath[1][:-3] + '_' + hash[:5] + '.n3')
+ except OSError as exception:
+        pass  # transform file may already be in place; ignore
+
+ #./sonic-annotator -t silvet_settings.n3 input.wav -w csv
+ # prepare arguments
+
+ # this is the standard output for now
+ args = ['-t', transform_file, wav_file, '-w', 'csv', '-w', 'rdf',
+ '--rdf-basedir',featpath,'--csv-basedir',featpath, '--rdf-many-files', '--rdf-append']
+
+ # csv only
+ # args = ['-t', transform_file, wav_file, '-w', 'csv','--csv-basedir',featpath]
+
+ # rdf only
+ # args = ['-t', transform_file, wav_file, '-w', 'rdf','--rdf-basedir',featpath,
+ # '--rdf-many-files', '--rdf-append']
+
+ # ---
+ # below would also output midi
+    # @todo: make language, e.g. based on dictionaries, that defines vamp parameters per plugin
+ # ---
+ #args = ['-t', transform_file, wav_file, '-w', 'csv', '-w', 'rdf', '-w', 'midi',
+ # '--rdf-basedir',featpath,'--csv-basedir',featpath, '--rdf-many-files', '--rdf-append']
+ #args = ['-t', transform_file, wav_file, '-w', 'csv', '--csv-force','--csv-basedir',featpath]
+
+
+ print "Analysing " + wav_file
+
+ result = vamp_host_process(args)
+ # execute vamp host
+ return [wav_file, result]
+
+# entry function only for testing
+# provide filename, uses fixed transform
+if __name__ == "__main__":
+
+ transform_file = 'silvet_settings_fast_finetune_allinstruments.n3'
+
+ # get transform hash
+ BLOCKSIZE = 65536
+ hasher = hashlib.sha1()
+ with open(transform_file, 'rb') as afile:
+ buf = afile.read(BLOCKSIZE)
+ while len(buf) > 0:
+ hasher.update(buf)
+ buf = afile.read(BLOCKSIZE)
+ hash = str(hasher.hexdigest())
+
+ if len(sys.argv) >= 2:
+ wav_file = sys.argv[1]
+ else:
+ wav_file = 'sweep.flac'
+
+    transform((wav_file, transform_file, hash, ''))
+
diff -r 000000000000 -r e34cf1b6fe09 ipcluster/test_sonic_annotator_notimeside.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ipcluster/test_sonic_annotator_notimeside.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,144 @@
+# Part of DML (Digital Music Laboratory)
+# Copyright 2014-2015 Daniel Wolff, City University
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/local/spark-1.0.0-bin-hadoop2/bin/spark-submit
+# -*- coding: utf-8 -*-
+__author__="wolffd"
+__date__ ="$11-Jul-2014 15:31:01$"
+import sys
+import time
+import os
+import hashlib
+from IPython.parallel import Client
+
+# this is the main routine to be submitted as a spark job
+#
+#
+# Running python applications through ./bin/pyspark is deprecated as of Spark 1.0.
+# Use ./bin/spark-submit --py-files sonic_annotator_vamp.py
+# you can also provide a zip of all necessary python files
+#
+# @param string audiopath root of the folder structure to be traversed
+# @param string transform_file path to the .n3 turtle file describing the transform
+#def main(audiopath = './',
+# transform_file = 'silvet_settings.n3',
+#         masterip = '0.0.0.0'):
+
+def main(audiopath, transform_path, out_path = ''):
+ print "iPCluster implementation for Vamp processing"
+
+ # ---
+ # initialise ipcluster
+ # ---
+ #time.sleep(20)
+ rc = Client()
+ nb_core = len(rc.ids)
+ lview = rc.load_balanced_view()
+ lview.block = False # asynch now
+ dview = rc[:]
+ dview.block = True
+
+ # import libraries
+ with dview.sync_imports():
+ import sys
+ import os
+ import sonic_annotator_vamp
+
+ # here traverse the file structure
+ data = []
+ count = 0
+ for (dirpath, dirnames, filenames) in os.walk(audiopath):
+ for file in filenames:
+ print '\rChecked %d files' % (count),
+ count = count + 1
+ if file.endswith(".wav") or file.endswith(".mp3") or file.endswith(".flac"):
+ data.append(os.path.join(dirpath, file).replace('\\','/'))
+ # count jobs
+ njobs = len(data)
+
+
+    # we now allow a single .n3 transform file or a directory of transforms
+ if transform_path.endswith(".n3"):
+ transform_files = [transform_path]
+ else:
+ transform_files = []
+ for file in os.listdir(transform_path):
+ if file.endswith(".n3"):
+ transform_files.append(transform_path + file)
+
+ for transform_file in transform_files:
+ # get transform hash
+ BLOCKSIZE = 65536
+ hasher = hashlib.sha1()
+ with open(transform_file, 'rb') as afile:
+ buf = afile.read(BLOCKSIZE)
+ while len(buf) > 0:
+ hasher.update(buf)
+ buf = afile.read(BLOCKSIZE)
+ hash = str(hasher.hexdigest())
+
+ # create action containing data and parameter file
+ action = [(x,transform_file,hash,out_path) for x in data]
+
+ # output the current task
+ tpath = os.path.split(transform_file)
+ print "Using " + tpath[1] + " on " + str(njobs) + " files"
+
+ # ---
+ # do the work!
+ # ---
+ ar = lview.map(sonic_annotator_vamp.transform, action)
+
+ # asynch process output
+ tic = time.time()
+ while True:
+
+ # update time used
+ toc = time.time()-tic
+
+ # update progress
+ msgset = set(ar.msg_ids)
+ completed = len(msgset.difference(rc.outstanding))
+ pending = len(msgset.intersection(rc.outstanding))
+
+ if completed > 0:
+ timerem = ((toc/completed) * pending) / 3600.0
+ print '\rRunning %3.2f hrs: %3.2f percent. %d done, %d pending, approx %3.2f hrs' % (toc / 3600.0, completed/(pending+completed*1.0) * 100.0,completed, pending, timerem),
+
+ if ar.ready():
+ print '\n'
+ break
+ time.sleep(1)
+
+ toc = time.time()-tic
+ #print ar.get()
+ print '\rProcessed %d files in %3.2f hours.' % (njobs,toc / 3600.0)
+ print '\n'
+
+ # output
+ #print(result)
+ #thefile = open(audiopath + tpath[1] + '.txt', 'w')
+ #for item in result:
+ # thefile.write("%s\n" % item)
+ #close(thefile)
+
+if __name__ == "__main__":
+ if len(sys.argv) >= 3:
+ main(sys.argv[1],sys.argv[2])
+ else:
+ main(audiopath = '/audio', transform_path = 'dml_processing/sonic_annotator/vamp_plugins/bbc_speechmusic.n3', out_path = './')
+
\ No newline at end of file
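Both this script and the copy tools below identify a transform by the SHA-1 digest of its `.n3` file, read in 64 KB blocks. The repeated pattern can be factored into a small helper; this is a sketch, and `hash_file` is not a name used in the repository:

```python
import hashlib

def hash_file(path, blocksize=65536):
    """Return the hex SHA-1 digest of a file, read in fixed-size blocks
    so arbitrarily large transform files never load fully into memory."""
    hasher = hashlib.sha1()
    with open(path, 'rb') as afile:
        buf = afile.read(blocksize)
        while len(buf) > 0:
            hasher.update(buf)
            buf = afile.read(blocksize)
    return hasher.hexdigest()
```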
diff -r 000000000000 -r e34cf1b6fe09 ipcluster/tools/copy_features.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ipcluster/tools/copy_features.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,120 @@
+#!/usr/local/python
+# -*- coding: utf-8 -*-
+__author__="wolffd"
+import os
+import sys
+import time
+import shutil
+import hashlib
+import re
+import errno
+
+
+# this copies features which used to be stored with the audio
+# usage: copy_features source_path dest_path transform_substr ext move?
+# params:
+# transform_substr can be either a substring or hash of the transform file
+# or the filename.n3 of a transform file
+#
+# e.g. : D:\tools>python copy_features.py "D:/_Audio/" "D:/Chord_Analysis/" "1a812" ".csv" 0
+
+def main(audiopath, out_path, transform_substr = "", ext = "", move = 0):
+ # check move input
+ if move == 1:
+ print "Move script for VAMP features on the BL server"
+ v = raw_input("Do you really want to move files? (y/n)")
+ if not (v == 'y'):
+ return
+ else:
+ print "Copy script for VAMP features on the BL server"
+
+ # replace transform_substr by pluginhash
+ if transform_substr.endswith (".n3"):
+ BLOCKSIZE = 65536
+ hasher = hashlib.sha1()
+ with open(transform_substr, 'rb') as afile:
+ buf = afile.read(BLOCKSIZE)
+ while len(buf) > 0:
+ hasher.update(buf)
+ buf = afile.read(BLOCKSIZE)
+ transform_substr = str(hasher.hexdigest())
+
+ # define valid extensions
+ if not ext:
+ ext = tuple([".n3",".csv",".mid"])
+ else:
+ ext = tuple([ext])
+
+ # ---
+ # we traverse the file structure
+ # and list files to copy
+ # ---
+ data = []
+ count = 0
+ count2 = 0
+ for (dirpath, dirnames, filenames) in os.walk(audiopath):
+ for file in filenames:
+ print '\rChecked %d, gathered %d files' % (count, count2),
+ count += 1
+
+ # we copy all requested files and the transform files as well!
+ if (file.endswith(ext) or (transform_substr and (transform_substr in file))) and (transform_substr in dirpath):
+ source = os.path.join(dirpath, file).replace('\\','/')
+ data.append(source)
+ count2 +=1
+
+ # count jobs
+ njobs = len(data)
+ print '\nAbout to copy or move %d files' % (njobs)
+
+
+ count = 0
+ # copy individual items
+ for x in data:
+ spath = os.path.split(x)
+ folders = spath[0].split('/')
+
+ # we remove the first folder that contains "_Analysis" from the path
+ max_depth = -3
+ skip = -2
+ for i in range(1, -max_depth+1):
+ if "_Analysis" in folders[-i]:
+ skip = -i
+ break
+
+        folderbase = [folders[j] for j in range(max_depth,skip) + range(skip+1,0)]
+        featpath = out_path + '/'.join(folderbase)
+
+ # create the target folder
+ try:
+ os.makedirs(featpath)
+ except OSError as exception:
+            if exception.errno != errno.EEXIST:
+ raise
+
+ # copy stuff
+ try:
+ dest = featpath + '/' + spath[1]
+ if move == 1:
+ #print '\rMoving %s to %s' % (x,dest)
+ shutil.move(x, dest )
+ else:
+ #print '\rCopying %s to %s' % (x,dest)
+ shutil.copy(x, dest )
+ count = count + 1
+        except (IOError, OSError, shutil.Error):
+            continue
+ # progress indicator
+ print '\r%3.2f percent. %d done, %d pending' % (count/(njobs*1.0) * 100.0,count, njobs-count),
+
+ print '\rCopied %d of %d files.' % (count,njobs)
+ print '\n'
+
+
+
+if __name__ == "__main__":
+    if len(sys.argv) >= 6:
+        main(sys.argv[1],sys.argv[2], sys.argv[3],sys.argv[4], int(sys.argv[5]))
+ else:
+ main(audiopath = 'D:/_Audio/',out_path = 'D:/_Audio_Analysis/', transform_substr = "", ext = "")
+
\ No newline at end of file
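The `_Analysis` path surgery in `copy_features.py` (and its folderwise variant) is easier to see in isolation: keep at most the last `max_depth` folder levels and drop the deepest level whose name contains `_Analysis`. A simplified sketch with a hypothetical helper name — unlike the original, it keeps the path intact when no `_Analysis` folder is found, instead of falling back to dropping the second-deepest level:

```python
def trim_analysis_folders(folders, max_depth=3):
    """Return up to the last max_depth folder names, with the deepest
    folder containing '_Analysis' removed (if one is present)."""
    tail = folders[-max_depth:]
    for i in range(len(tail) - 1, -1, -1):  # search from the deepest level
        if "_Analysis" in tail[i]:
            return tail[:i] + tail[i + 1:]
    return tail
```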
diff -r 000000000000 -r e34cf1b6fe09 ipcluster/tools/copy_features_folderwise.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ipcluster/tools/copy_features_folderwise.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,118 @@
+#!/usr/local/python
+# -*- coding: utf-8 -*-
+__author__="wolffd"
+import os
+import sys
+import time
+import shutil
+import hashlib
+import re
+import errno
+
+
+# this copies features which used to be stored with the audio
+# usage: copy_features source_path dest_path transform_substr ext move?
+# params:
+# transform_substr can be either a substring or hash of the transform file
+# or the filename.n3 of a transform file
+#
+# e.g. : D:\tools>python copy_features_folderwise.py "D:/_Audio/" "D:/Chord_Analysis/" "1a812" 0
+
+def main(audiopath, out_path, transform_substr = "", move = 0):
+ # check move input
+ if move == 1:
+ print "Move script for VAMP features on the BL server"
+ v = raw_input("Do you really want to move folders? (y/n)")
+ if not (v == 'y'):
+ return
+ else:
+ print "Copy script for VAMP features on the BL server"
+
+ # replace transform_substr by pluginhash
+ if transform_substr.endswith (".n3"):
+ BLOCKSIZE = 65536
+ hasher = hashlib.sha1()
+ with open(transform_substr, 'rb') as afile:
+ buf = afile.read(BLOCKSIZE)
+ while len(buf) > 0:
+ hasher.update(buf)
+ buf = afile.read(BLOCKSIZE)
+ transform_substr = str(hasher.hexdigest())
+
+ # ---
+ # we traverse the file structure
+ # and list files to copy
+ # ---
+ data = []
+ count = 0
+ count2 = 0
+ for (dirpath, dirnames, filenames) in os.walk(audiopath):
+ for dir in dirnames:
+ print '\rChecked %d, gathered %d folders' % (count, count2),
+ count += 1
+
+ # we copy all requested files and the transform files as well!
+ if (transform_substr and (transform_substr in dir)):
+ source = os.path.join(dirpath, dir).replace('\\','/')
+ data.append(source)
+ count2 +=1
+
+ # count jobs
+ njobs = len(data)
+ print '\nAbout to copy or move %d directories' % (njobs)
+
+
+ count = 0
+ # copy individual items
+ for x in data:
+
+ spath = os.path.split(x)
+ folders = spath[0].split('/')
+
+ # if exists, we remove the first folder
+ # which contains "_Analysis" from the path
+ # maxdepth contains the maximum depth of folders structure to keep,
+ # counted from the most specific folder level
+ max_depth = -2
+ skip = -2
+ for i in range(1, -max_depth+1):
+ if "_Analysis" in folders[-i]:
+ skip = -i
+ break
+
+ folderbase = [folders[j] for j in range(max_depth,skip) + range(skip+1,0)]
+ featpath = out_path + '/'.join(folderbase)
+
+ # create the target folder
+ try:
+ os.makedirs(featpath)
+ except OSError as exception:
+            if exception.errno != errno.EEXIST:
+ raise
+
+ # copy stuff
+
+ dest = featpath + '/' + spath[1]
+ if move == 1:
+ #print '\rMoving %s to %s' % (x,dest)
+ shutil.move(x, dest )
+
+ if move == 0:
+ #print '\rCopying %s to %s' % (x,dest)
+ shutil.copytree(x, dest )
+
+ count = count + 1
+
+ # progress indicator
+ print '\r%3.2f percent. %d done, %d pending' % (count/(njobs*1.0) * 100.0,count, njobs-count),
+
+ print '\rCopied %d of %d folders.' % (count,njobs)
+ print '\n'
+
+
+if __name__ == "__main__":
+ if len(sys.argv) >= 5:
+ main(sys.argv[1],sys.argv[2], sys.argv[3], int(sys.argv[4]))
+ else:
+        main(audiopath = 'D:/_Audio/',out_path = 'D:/_Audio_Analysis/', transform_substr = "")
+
diff -r 000000000000 -r e34cf1b6fe09 ipcluster/tools/turtle2rdf.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ipcluster/tools/turtle2rdf.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,19 @@
+# this file converts a Turtle/n3 file to RDF/XML
+
+import sys
+import rdflib
+
+def n3_to_rdf(infile, outfile):
+    # create empty graph
+    g = rdflib.Graph()
+
+    # import data and serialise to RDF/XML
+    g.parse(infile, format="n3")
+    result = g.serialize(destination=outfile, format="xml")
+    return result
+
+if __name__ == "__main__":
+    if len(sys.argv) >= 3:
+        n3_to_rdf(sys.argv[1],sys.argv[2])
+    else:
+        print "Usage: turtle2rdf.py filein.n3 fileout.rdf"
+
\ No newline at end of file
diff -r 000000000000 -r e34cf1b6fe09 ipcluster/tools/zip_features_folderwise.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ipcluster/tools/zip_features_folderwise.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,113 @@
+#!/usr/local/python
+# -*- coding: utf-8 -*-
+__author__="wolffd"
+import os
+import sys
+import time
+import shutil
+import hashlib
+import re
+import errno
+import subprocess
+
+
+# this zips feature folders which used to be stored with the audio
+# usage: zip_features_folderwise source_path dest_path transform_substr move?
+# params:
+# transform_substr can be either a substring or hash of the transform file
+# or the filename.n3 of a transform file
+
+
+def main(audiopath, out_path, transform_substr = "", move = 0):
+
+
+
+ # replace transform_substr by pluginhash
+ if transform_substr.endswith (".n3"):
+ BLOCKSIZE = 65536
+ hasher = hashlib.sha1()
+ with open(transform_substr, 'rb') as afile:
+ buf = afile.read(BLOCKSIZE)
+ while len(buf) > 0:
+ hasher.update(buf)
+ buf = afile.read(BLOCKSIZE)
+ transform_substr = str(hasher.hexdigest())
+
+ # ---
+ # we traverse the file structure
+ # and list files to copy
+ # ---
+ data = []
+ count = 0
+ count2 = 0
+ for (dirpath, dirnames, filenames) in os.walk(audiopath):
+ for dir in dirnames:
+            print '\rChecked %d, gathered %d folders' % (count, count2),
+ count += 1
+
+ # we copy all requested files and the transform files as well!
+ if (transform_substr and (transform_substr in dir)):
+ source = os.path.join(dirpath, dir).replace('\\','/')
+ data.append(source)
+ count2 +=1
+# if count2 > 1:
+# break
+
+ # count jobs
+ njobs = len(data)
+ print '\r\nAbout to copy %d directories' % (njobs)
+
+
+ count = 0
+ # copy individual items
+ for x in data:
+ spath = os.path.split(x)
+ folders = spath[0].split('/')
+
+ collectionname = folders[-1]
+
+ # create the target folder
+ #try:
+ # os.makedirs(out_path)
+ #except OSError as exception:
+ # if exception.errno!= errno.EEXIST:
+ # raise
+
+        command = '7z a -r -mmt=30 -mx5 -v50g -mm=bzip2 ' + '"' + os.path.join(out_path,collectionname + '_'+ transform_substr + '.zip') + '"' + ' ' + x
+ print command
+ print " Currently zipping " + collectionname
+
+ p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
+ text = p.stdout.read()
+ retcode = p.wait()
+ if not (retcode==0):
+ print "Error: " + text
+
+ #os.system(command)
+ #print 'Removing ' + dest
+ #shutil.rmtree(dest)
+
+ count = count + 1
+
+ # progress indicator
+ print '\r%3.2f percent. %d done, %d pending' % (count/(njobs*1.0) * 100.0,count, njobs-count),
+
+    print '\rZipped %d of %d folders.' % (count,njobs)
+ print '\n'
+
+def copytree(src, dst, symlinks=False, ignore=None):
+    # copy a directory tree item by item: files go via copy2,
+    # subdirectories via shutil.copytree
+    if not os.path.isdir(dst):
+        os.makedirs(dst)
+    for item in os.listdir(src):
+        s = os.path.join(src, item)
+        d = os.path.join(dst, item)
+        if os.path.isdir(s):
+            shutil.copytree(s, d, symlinks, ignore)
+        else:
+            shutil.copy2(s, d)
+
+if __name__ == "__main__":
+ if len(sys.argv) >= 5:
+ main(sys.argv[1],sys.argv[2], sys.argv[3], int(sys.argv[4]))
+ else:
+        main(audiopath = 'D:/_Audio/',out_path = 'D:/_Audio_Analysis/', transform_substr = "")
+
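The 7z invocation above follows a common pattern: capture stdout and stderr together, then inspect the return code before trusting the output. Isolated as a sketch (the function name is ours, not the script's):

```python
import subprocess

def run_and_capture(cmd):
    """Run a command, returning (exit_code, combined stdout+stderr).
    Mirrors the Popen/read/wait sequence used around 7z above."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    out = p.stdout.read()
    code = p.wait()
    return code, out
```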
diff -r 000000000000 -r e34cf1b6fe09 pyspark/Makefile
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/Makefile Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,5 @@
+release:
+ tar czf dml-cla.tar.gz dml-analyser.* n3Parser.py ontologies transforms
+
+unrelease:
+ tar xzf dml-cla.tar.gz
diff -r 000000000000 -r e34cf1b6fe09 pyspark/csvParser.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/csvParser.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,113 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# -*- coding: utf-8 -*-
+__author__="Daniel Wolff"
+
+import codecs
+import warnings
+import numpy
+import csv
+from n3Parser import uri2path
+
+
+# reads csv file into a table,
+# the first column, containing "time", is converted to float; the rest is left as strings
+# data formats are for example:
+# for silvet pitch output:['time','duration','pitch','velocity','label']
+# for qm_vamp_key_standard output: ['time','keynr','label']
+# for qm_vamp_key_standard_tonic output: ['time','keynr','label']
+#
+# data can be nicely traversed, e.g.:
+# for time, duration, pitch, velocity, label in output: ...
+def get_array_from_csv(input_f_file):
+
+ output = []
+ badcount = 0
+
+ # keep track of column names
+ ncols = 0
+ with open(uri2path(input_f_file), 'rb') as csvfile:
+ contents = csv.reader(csvfile, delimiter=',', quotechar='"')
+ for row in contents:
+ if ncols == 0:
+ ncols = len(row)
+
+ if len(row) >= ncols:
+ # we assume format time , ...
+ output.append([float(row[0])] + row[1:])
+ else:
+ badcount += 1
+
+ if badcount > 0:
+ warnings.warn("Incomplete csv file, ignoring " + str(badcount) + " entries")
+
+ return output
+
+
+
+
+
+# converts csv input to dictionary with entities named as in "columtype".
+#
+# first value (time) is assumed to be float
+# for silvet pitch output call_
+# csv_to_dict(input_f_file, columtype = ['time','duration','pitch','velocity','label'])
+# for qm_vamp_key_standard output call
+# csv_to_dict(input_f_file, columtype = ['time','keynr','label'])
+# for qm_vamp_key_standard_tonic output call
+# csv_to_dict(input_f_file, columtype = ['time','keynr','label'])
+def get_dict_from_csv(input_f_file, columtype = ['time']):
+
+ output = []
+ badcount = 0
+
+ # keep track of column names
+ ncols = 0
+ with open(uri2path(input_f_file), 'rb') as csvfile:
+ contents = csv.reader(csvfile, delimiter=',', quotechar='"')
+ for row in contents:
+
+ # initialise the column name
+ if ncols == 0:
+ ncols = len(row)
+
+ # get number of descriptors, and append if left empty
+ ncoldescr = len(columtype)
+ if ncoldescr < ncols:
+ warnings.warn("Column types missing")
+ columtype.extend(['data'+str(i) for i in range(ncoldescr+1, ncols+1)])
+
+ if len(row) == ncols:
+ # parse the csv data into dict
+ rowdict = dict()
+ for i,col in enumerate(columtype):
+ # first value (time) is transformed to float
+ if i == 0:
+ rowdict[col] = float(row[i])
+ else:
+ rowdict[col] = row[i]
+
+ # append dictionary to output
+ output.append(rowdict)
+
+ else:
+ badcount += 1
+
+ if badcount > 0:
+ warnings.warn("Incomplete csv file, ignoring " + str(badcount) + " entries")
+
+ return output
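The behaviour of `get_dict_from_csv` — zip each row against the column names, convert the first (`time`) column to float, skip incomplete rows — can be sketched without the file I/O. Names here are illustrative, not part of the module:

```python
import csv
import io

def rows_to_dicts(text, columns):
    """Parse CSV text into a list of dicts keyed by the given column
    names; the first column is converted to float, short rows skipped."""
    output = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) < len(columns):
            continue  # incomplete row, like the parser's badcount branch
        rowdict = dict(zip(columns, row))
        rowdict[columns[0]] = float(rowdict[columns[0]])
        output.append(rowdict)
    return output
```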
diff -r 000000000000 -r e34cf1b6fe09 pyspark/decode_to_wav.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/decode_to_wav.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,45 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+__author__="wolffd"
+__date__ ="$21-Jul-2014 15:36:41$"
+
+# -*- coding: utf-8 -*-
+#from timeside.decoder import *
+#from timeside.encoder import *
+#import os.path
+#import sys
+# now use a regular timeside installation, e.g. installed by
+#sys.path.append(os.getcwd() + '/../TimeSide/')
+
+from timeside.decoder.file import *
+from timeside.encoder.wav import *
+
+def decode_to_wav(source = 'sweep.flac'):
+ if source[-4:] == ".wav" :
+ dest = source
+ print "already converted: " + dest
+
+ else:
+ dest = source + '.wav'
+ decoder = FileDecoder(source)
+ encoder = WavEncoder(dest, overwrite=True)
+ (decoder | encoder).run()
+ print "decoded: " + dest
+ return dest
diff -r 000000000000 -r e34cf1b6fe09 pyspark/dml-analyser.cfg
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/dml-analyser.cfg Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,3 @@
+[Ontology]
+
+dmlclaOntology_URI = ontologies/dmlclaOntology.n3
diff -r 000000000000 -r e34cf1b6fe09 pyspark/dml-analyser.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/dml-analyser.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,188 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+__author__="hargreavess"
+
+import ConfigParser
+import logging
+import os
+import time
+import shutil
+import argparse
+from os import walk
+import rdflib
+from rdflib import Graph
+from RDFClosure import DeductiveClosure, OWLRL_Semantics
+from transforms.tonicHistogram import find_cla_tonic_histogram, add_tonic_histogram_to_graph
+from transforms.tuningFrequencyStatistics import find_cla_tf_statistics, add_tf_statistics_to_graph
+from transforms.semitoneHistogram import find_cla_semitone_histogram, add_semitone_histogram_to_graph
+from transforms.tonicNormSemitoneHistogram import find_cla_tonic_norm_semitone_histogram, add_tonic_norm_semitone_histogram_to_graph
+
+input_rdf_graph = Graph()
+
+def main():
+
+ # get config
+ config = ConfigParser.ConfigParser()
+ config.read('dml-analyser.cfg')
+
+    # parse dmlcla ontology
+ input_rdf_graph.parse(config.get('Ontology', 'dmlclaOntology_URI'), format="n3")
+ DeductiveClosure(OWLRL_Semantics).expand(input_rdf_graph)
+
+ # parse input rdf
+ input_rdf_graph.parse(args.transforms, format="n3")
+ DeductiveClosure(OWLRL_Semantics).expand(input_rdf_graph)
+
+ # initialise output rdf graph
+ output_rdf_graph = Graph()
+
+ # Determine which transforms are to be applied, and
+ # the associated input files
+ transforms = find_transforms_in_n3(input_rdf_graph)
+
+ # Apply the transform(s) to each file and create
+ # rdf results graph
+ output_rdf_graph = execute_transforms(transforms, output_rdf_graph)
+
+ # Write output rdf to stdout
+ print(output_rdf_graph.serialize(format='n3'))
+
+# Loop through all transforms, process the corresponding
+# input files appropriately and add the (RDF) result to output_rdf_graph
+def execute_transforms(transforms, output_rdf_graph):
+
+ transform_iter = transforms.iterkeys()
+ key_histogram = []
+
+ for (transform, transform_type) in transforms:
+
+ input_f_files = transforms.get((transform, transform_type))
+
+ # Add additional clauses to this if statement
+ # for each transform type
+ if transform_type == rdflib.term.URIRef(u'http://dml.org/dml/cla#CollectionLevelTonic'):
+
+ (tonic_histogram, sample_count) = find_cla_tonic_histogram(input_f_files)
+ output_rdf_graph = add_tonic_histogram_to_graph(tonic_histogram, output_rdf_graph, transform, sample_count, input_f_files)
+
+ elif transform_type == rdflib.term.URIRef(u'http://dml.org/dml/cla#CollectionLevelTuningFrequencyStatistics'):
+
+ statistics, sample_count = find_cla_tf_statistics(input_f_files)
+ output_rdf_graph = add_tf_statistics_to_graph(statistics, output_rdf_graph, transform, sample_count, input_f_files)
+
+ elif transform_type == rdflib.term.URIRef(u'http://dml.org/dml/cla#CollectionLevelSemitone'):
+
+ (semitone_histogram, sample_count) = find_cla_semitone_histogram(input_f_files)
+ output_rdf_graph = add_semitone_histogram_to_graph(semitone_histogram, output_rdf_graph, transform, sample_count, input_f_files)
+
+ elif transform_type == rdflib.term.URIRef(u'http://dml.org/dml/cla#CollectionLevelTonicNormSemitone'):
+
+ (tonic_norm_semitone_histogram, sample_count) = find_cla_tonic_norm_semitone_histogram(input_f_files, input_rdf_graph)
+ output_rdf_graph = add_tonic_norm_semitone_histogram_to_graph(tonic_norm_semitone_histogram, output_rdf_graph, transform, sample_count, input_f_files, input_rdf_graph)
+
+ return output_rdf_graph
+
+# Find all transforms, and their associated input files,
+# from rdf_graph
+def find_transforms_in_n3(rdf_graph):
+
+    qres = rdf_graph.query(
+        """prefix dml: <http://dml.org/dml/cla#>
+        SELECT ?transform ?dml_input ?transform_type
+        WHERE {
+        ?transform a dml:Transform .
+        ?transform dml:input ?dml_input .
+        ?transform dml:type ?transform_type .
+        }""")
+
+ transforms = dict()
+
+ for row in qres:
+
+ transform_bnode = row.transform
+ dml_input = row.dml_input
+ transform_type = row.transform_type
+
+ if transforms.has_key((transform_bnode, transform_type)):
+
+ transform_key = transforms.get((transform_bnode, transform_type))
+ transform_key.append(dml_input)
+
+ else:
+
+ transforms[(transform_bnode, transform_type)] = [dml_input]
+
+ return transforms
+
+# Determine the mapping between feature file URIs and
+# their source audio file URIs
+def map_audio_to_feature_files():
+
+ # Loop through audio files
+ lines = [line.strip() for line in args.audio_files]
+
+ for audio_file in lines:
+
+ print "sonic-annotator -T " + args.transforms + " --rdf-basedir " + args.basedir + " <" + audio_file + ">"
+
+ audio_to_feature_file_dict = dict()
+
+ for (dirpath, dirnames, filenames) in walk(args.basedir):
+ for file in filenames:
+
+ print "found file: " + file
+
+ if file.endswith(".n3"):
+
+ print "found n3 file: " + file
+
+ # open and parse n3 file
+ rdf_graph = Graph()
+ rdf_graph.parse(os.path.join(dirpath, file), format="n3")
+
+ # find subject in ?subject a mo:AudioFile
+            qres = rdf_graph.query(
+                """prefix mo: <http://purl.org/ontology/mo/>
+                SELECT ?audio_file
+                WHERE {
+                ?audio_file a mo:AudioFile .
+                }""")
+
+ print len(qres)
+
+ for row in qres:
+
+ print("audio file URI is %s" % row.audio_file.n3())
+ print("feature file URI is %s" % os.path.join(os.getcwd(), dirpath, file))
+ audio_to_feature_file_dict[row.audio_file.n3()] = os.path.join(os.getcwd(), dirpath, file)
+
+ # add full file URI, subject to dict
+
+ print audio_to_feature_file_dict
+
+if __name__ == "__main__":
+
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument("-T", "--transforms", help="the URI of an n3 (RDF) file describing one or more transforms, and the files to which they should be applied")
+ parser.add_argument("-b", "--basedir", help="the URI of the base output directory")
+
+ args = parser.parse_args()
+
+ main()
+
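`find_transforms_in_n3` builds a multimap from (transform, type) pairs to their input files; the accumulation step reduces to a `setdefault` idiom. A sketch with plain tuples standing in for the RDF terms:

```python
def group_transform_inputs(rows):
    """Group (transform, input, type) query rows into a dict keyed by
    (transform, type), each value collecting that transform's inputs."""
    transforms = {}
    for transform, dml_input, transform_type in rows:
        transforms.setdefault((transform, transform_type), []).append(dml_input)
    return transforms
```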
diff -r 000000000000 -r e34cf1b6fe09 pyspark/dml-cla.tar.gz
Binary file pyspark/dml-cla.tar.gz has changed
diff -r 000000000000 -r e34cf1b6fe09 pyspark/ilm/assetDB.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/ilm/assetDB.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,229 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/env python
+# encoding: utf-8
+"""
+assetDB.py
+
+Created by George Fazekas on 2012-01-16. Modifications by Mathieu Barthet in 2013-12,
+Steven Hargreaves 22/12/2014.
+Copyright (c) 2013 . All rights reserved.
+"""
+
+import sys,os,logging
+import sqlalchemy as sal
+from sqlalchemy.ext.declarative import declarative_base
+from sqlalchemy import Column, Integer, String, Sequence, Enum
+from sqlalchemy.orm import sessionmaker
+from sqlalchemy.dialects import mysql
+from hashlib import md5
+
+class assetDB(object):
+
+ asset_types = ['wav','mpeg/320kbps','mpeg/64kbps']
+ extensions = ['wav','mp3','mp3']
+ ext = dict(zip(asset_types,extensions))
+
+ def __init__(self, prefix, pref=list(), config=None):
+ self.log = logging.getLogger('spark_feat_extract')
+ self.log.info("ORM Version: %s",sal.__version__)
+ self.config = config
+ self.session = None
+ self.Assets = None
+ self.prefix = prefix
+ if pref :
+ self.asset_prefs = pref
+ else :
+ self.asset_prefs = assetDB.asset_types
+ # reporting errors:
+ self.found_different_asset_type = 0
+ self.errata_file = None
+ if config and hasattr(config,"db_errata_file") :
+ self.errata_file = config.db_errata_file
+
+
+ def connect(self,echo=False):
+ '''Connect to the MySQL database and create a session.'''
+ URL = "mysql://%s:%s@%s/%s" %(self.config.get('Commercial Asset Database', 'user'),self.config.get('Commercial Asset Database', 'passwd'),self.config.get('Commercial Asset Database', 'host'),self.config.get('Commercial Asset Database', 'name'))
+ self.log.info("Connecting to database server at: %s",URL.replace(self.config.get('Commercial Asset Database', 'passwd'),'*****'))
+ engine=sal.create_engine(URL, echo=echo)
+ Session = sessionmaker(bind=engine)
+ self.session = Session()
+ self.log.debug("MySQL session created successfully.")
+ return self
+
+ def close(self):
+ '''Close the database session'''
+ if self.session :
+ self.session.close()
+ self.log.info("Database closed.")
+ return self
+
+ def create_mapper(self):
+ '''Create an Object-Relational Mapper'''
+ Base = declarative_base()
+ class Assets(Base):
+ #change
+ #__tablename__ = 'assets'
+ __tablename__ = self.config.get('Commercial Asset Database', 'tablename')
+ # map all table columns to variables here, e.g.
+ # album_id = Column(Integer, primary_key=True)
+ # song_title = Column(String)
+ # genre_id = Column(Integer)
+ self.Assets = Assets
+ return self
+
+ def get_assets(self,start=0,limit=10,asset_type='audio/x-wav'):
+ '''Returns some assets from the database.
+        If the path given by the specified asset type does not exist,
+ try to find the assets given the preference list provided in self.asset_prefs.
+ If no valid path can be found for an asset, log the error and yield None for the path.'''
+ # limit = start + limit # this changes the semantics of the SQL limit
+
+ # create the ORM mapper object if doesn't exist
+        if self.Assets is None :
+ self.create_mapper()
+
+ # generate an SQL query and for each asset in the results, yield a (validated) path name for the asset, or yield None if not found
+ for asset in self.session.query(self.Assets)[start:limit]:
+ path = self.generate_path(asset,asset_type)
+ if self.validate_path(path) and self.validate_size(path,asset_type):
+ yield path,asset
+ elif not self.asset_prefs :
+ self.log.error("Requested file for asset not found: %s. (Album ID: %i)",asset.song_title,asset.album_id)
+ yield None,asset
+ else :
+ #change
+ self.log.warning("Requested file for asset bad or not found: %s. (Album ID: %i)",asset.song_title,asset.album_id)
+ self.log.warning("Trying other asset types.")
+ path = self.find_preferred_asset_path(asset)
+ if path == None :
+ yield None,asset
+ else :
+ yield path,asset
+ # ensure each asset yields only once
+ pass
+ pass
+
+ def get_assets_by_genre(self,genre_id,start=0,limit=10,asset_type='audio/x-wav'):
+ '''Returns some assets of the given genre_id from the database.
+        If the path given by the specified asset type does not exist,
+ try to find the assets given the preference list provided in self.asset_prefs.
+ If no valid path can be found for an asset, log the error and yield None for the path.'''
+ # limit = start + limit # this changes the semantics of the SQL limit
+
+ # create the ORM mapper object if doesn't exist
+        if self.Assets is None :
+ self.create_mapper()
+
+ # generate an SQL query and for each asset in the results, yield a (validated) path name for the asset, or yield None if not found
+ for asset in self.session.query(self.Assets).filter(self.Assets.genre_id == genre_id).all()[start:limit]:
+ path = self.generate_path(asset,asset_type)
+ if self.validate_path(path) and self.validate_size(path,asset_type):
+ yield path,asset
+ elif not self.asset_prefs :
+ self.log.error("Requested file for asset not found: %s. (Album ID: %i)",asset.song_title,asset.album_id)
+ yield None,asset
+ else :
+ #change
+ self.log.warning("Requested file for asset bad or not found: %s. (Album ID: %i)",asset.song_title,asset.album_id)
+ self.log.warning("Trying other asset types.")
+ path = self.find_preferred_asset_path(asset)
+ if path == None :
+ yield None,asset
+ else :
+ yield path,asset
+ # ensure each asset yields only once
+ pass
+ pass
+
+ def find_preferred_asset_path(self,asset):
+ '''Iteratively find a path name for each asset type in asset_prefs and return the first one available.
+ Return None if not found and log this event for error management.'''
+ path = unicode()
+ for asset_type in self.asset_prefs :
+ path = self.generate_path(asset,asset_type)
+ if self.validate_path(path):
+ self.log.info("Asset found but type is different from requested: %s. (Album ID: %i) ",asset.song_title,asset.album_id)
+ self.append_db_errata(path,"Found different asset type for problem case. (%s)"%asset_type)
+ self.found_different_asset_type += 1
+ if self.validate_size(path,asset_type):
+ return path
+ else :
+                    self.log.error("Requested file for asset is wrong size, probably corrupt: %s. (Album ID: %i)",asset.song_title,asset.album_id)
+ continue
+ else:
+ self.append_db_errata(path,"File not found.")
+ if len(path) == 0 :
+ self.log.warning("Asset not found for: %s. (Album ID: %i)",asset.song_title,asset.album_id)
+ return None
+
+ def generate_path(self,asset,asset_type):
+ '''Generate the path name given a asset database object and a requested asset type'''
+ path = '' # need to generate audio file path here
+ return path
+
+ def validate_path(self,path):
+ '''Validate the generated path name.'''
+ return os.path.isfile(path)
+
+ def validate_size(self,path,asset_type):
+ '''Check if the file size makes sense.'''
+ size = -1
+ try :
+ size = int(os.path.getsize(path))
+ except Exception, e:
+ self.append_db_errata(path,"Unable to determine file size.")
+ self.log.error("Unable to determine file size: %s." %path)
+ self.log.error("Exception %s."%str(e))
+ return False
+ if size == 0 :
+ self.append_db_errata(path,"File has zero size.")
+ self.log.error("File has zero size: %s."%path)
+ return False
+ if 'wav' in asset_type :
+ # rationale: with very small files some feature extractor plugins fail or output junk
+ if size > 209715200 or size < 209715 :
+ self.append_db_errata(path,"Rejected file size is: %f KB" %(size/1024.0))
+ return False
+ if 'mpeg' in asset_type :
+            # same rationale, assuming about 1:10 compression
+ if size > 41943040 or size < 65536 :
+ self.append_db_errata(path,"Rejected file size is: %f KB" %(size/1024.0))
+ return False
+ return True
+
+ def get_different_asset_no(self):
+ '''Return a count of the cases where the preferred asset type was not found'''
+ return self.found_different_asset_type
+
+ def reset_different_asset_no(self):
+ '''Reset the counter of cases where the preferred asset type was not found.'''
+ self.found_different_asset_type = 0
+
+ def append_db_errata(self,filename,reason,metadata=""):
+ '''Append to a file collecting assets present in the DB but not found on disk.'''
+ if not self.errata_file : return False
+ try :
+ with open(self.errata_file,"a+") as ef:
+ if metadata :
+ ef.write("%(filename)s,%(reason)s,%(metadata)s\n"%locals())
+ else:
+ ef.write("%(filename)s,%(reason)s\n"%locals())
+ except Exception:
+ self.log.error("Failed to append database errata.")
+
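The size windows hard-coded in `validate_size` above can be captured as a small standalone predicate. This is an illustrative sketch, not part of the patch; `size_is_plausible` and `SIZE_WINDOWS` are hypothetical names, with the thresholds taken from the code above:

```python
# Size windows from validate_size: wav files between ~200 KB and 200 MB,
# mpeg between 64 KB and 40 MB (assuming roughly 1:10 compression).
SIZE_WINDOWS = {
    'wav': (209715, 209715200),
    'mpeg': (65536, 41943040),
}

def size_is_plausible(size, asset_type):
    """Return True if `size` (bytes) falls inside the window for `asset_type`."""
    for key, (lo, hi) in SIZE_WINDOWS.items():
        if key in asset_type:
            return lo <= size <= hi
    # unknown asset types are not size-checked, mirroring the original logic
    return True
```

Like the original, the check matches on substring (so 'audio/wav' and 'wav' both hit the wav window) and lets unrecognised types through.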
diff -r 000000000000 -r e34cf1b6fe09 pyspark/ilm/server.cfg
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/ilm/server.cfg Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,28 @@
+[Files]
+
+[Audio Files]
+audio-file-size-limit = 100
+audio-prefix = /media
+
+[Sonic Annotator]
+output-dir = /media/data
+vamp-transform-list = transform_list.txt
+
+[Commercial Asset Database]
+host = localhost
+user = ???
+passwd = ???
+name = ???
+tablename = ???
+
+[Queries]
+sql-start = 0
+sql-limit = 1000000
+genre-id = 16
+
+[Spark]
+num-cores = 20
+memory = 100g
+
+[Application]
+array-step-size = 1000
diff -r 000000000000 -r e34cf1b6fe09 pyspark/ilm/spark_feat_extract.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/ilm/spark_feat_extract.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,158 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/local/spark-1.0.0-bin-hadoop2/bin/spark-submit
+# -*- coding: utf-8 -*-
+__author__="hargreavess"
+
+from assetDB import assetDB
+from pyspark import SparkConf, SparkContext
+import ConfigParser
+import logging
+from transform import *
+import os
+import time
+import shutil
+
+def main():
+ start_complete = time.time()
+
+ # get config
+ config = ConfigParser.ConfigParser()
+ config.read('server.cfg')
+
+ #vamp_transform = [config.get('Sonic Annotator', 'vamp-transform')]
+ vamp_transform_list = config.get('Sonic Annotator', 'vamp-transform-list')
+ genre_id = config.getint('Queries', 'genre-id')
+
+ output_dir = config.get('Sonic Annotator', 'output-dir')
+ ltime = time.localtime()
+ output_dir = output_dir + '_' + str(ltime.tm_mday) + '_' + str(ltime.tm_mon) + '_' + str(ltime.tm_year)
+ output_dir = output_dir + '_' + str(ltime.tm_hour) + str(ltime.tm_min) + '_' + str(ltime.tm_sec)
+ output_dir = output_dir + '_genre_id_' + str(genre_id)
+ # create output directory, if it doesn't exist
+ if not os.access(output_dir, os.F_OK):
+ os.makedirs(output_dir)
+
+ # copy vamp_transform_list file to output directory
+ shutil.copy(vamp_transform_list, output_dir)
+
+ # create logger
+ logger = logging.getLogger('spark_feat_extract')
+ logger.setLevel(logging.DEBUG)
+
+ # create file handler and set level to debug
+ fh = logging.FileHandler(output_dir + "/ilm.assets.spark.features.log")
+ fh.setLevel(logging.DEBUG)
+
+ # create formatter
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+
+ # add formatter to fh
+ fh.setFormatter(formatter)
+
+ # add fh to logger
+ logger.addHandler(fh)
+
+ logger.info('starting new spark_feat_extract job')
+ logger.info("using vamp transform list: " + vamp_transform_list)
+ logger.info('audio-file-size-limit: ' + config.get('Audio Files', 'audio-file-size-limit'))
+ logger.info("audio-prefix: " + config.get('Audio Files', 'audio-prefix'))
+ logger.info('num-cores: ' + config.get('Spark', 'num-cores'))
+ logger.info("spark memory: " + config.get('Spark', 'memory'))
+ logger.info("genre_id: " + str(genre_id))
+
+ # create a spark context
+ conf = (SparkConf()
+ .setMaster("local[" + config.get('Spark', 'num-cores') + "]")
+ .setAppName("spark feature extractor")
+ .set("spark.executor.memory", "" + config.get('Spark', 'memory') + ""))
+ sc = SparkContext(conf = conf)
+
+ SQL_start = config.getint('Queries', 'sql-start')
+ SQL_limit = config.getint('Queries', 'sql-limit')
+ local_SQL_start = SQL_start
+ logger.info('SQL_start = %i', SQL_start)
+ logger.info('SQL_limit = %i', SQL_limit)
+
+ array_step_size = config.getint('Application', 'array-step-size')
+ logger.info('array-step-size = %i', array_step_size)
+ local_SQL_limit = min(SQL_limit, array_step_size)
+
+ while local_SQL_limit <= SQL_limit:
+
+ # query db for assets (song tracks)
+ db = assetDB(prefix=config.get('Audio Files', 'audio-prefix'),config=config)
+ db.connect()
+
+ data = []
+ logger.info('local_start = %i', local_SQL_start)
+ logger.info('local_SQL_limit = %i', local_SQL_limit)
+
+ for path, asset in db.get_assets_by_genre(genre_id, local_SQL_start, local_SQL_limit):
+ if path is None:
+ logger.warning("Asset not found for: %s. (Album ID: %i Track No: %i)",asset.song_title,asset.album_id,asset.track_no)
+ else:
+ data.append(path)
+
+ db.close()
+
+ # If the db query returned no results, stop here
+ if len(data) == 0:
+ break
+
+ batch_output_dir = output_dir + '/batch' + str(local_SQL_start) + '-' + str(local_SQL_limit)
+ os.makedirs(batch_output_dir)
+ logger.info('created results directory ' + batch_output_dir)
+
+ logger.info("calling sc.parallelize(data)...")
+ start = time.time()
+
+ # define distributed dataset
+ distData = sc.parallelize(data)
+ end = time.time()
+ logger.info("finished in " + str(end - start))
+
+ logger.info("calling distData.map...")
+ start = time.time()
+
+ # define map
+ m1 = distData.map(lambda x: transform(audio_file=x,
+ vamp_transform_list=vamp_transform_list,
+ output_dir=batch_output_dir))
+ end = time.time()
+ logger.info("finished in " + str(end - start))
+
+ logger.info("calling m1.collect()...")
+ start = time.time()
+
+ # collect results
+ theResult = m1.collect()
+
+ end = time.time()
+
+ logger.info("finished in " + str(end - start))
+
+ local_SQL_start += array_step_size
+ local_SQL_limit += min(SQL_limit, array_step_size)
+
+ print "finished all in " + str(end - start_complete)
+ logger.info("finished all in " + str(end - start_complete))
+
+if __name__ == "__main__":
+ main()
+
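The batching arithmetic driven by `sql-start`, `sql-limit` and `array-step-size` above can be traced in isolation. This is a sketch with the database and Spark calls stripped out; `batch_windows` is a hypothetical name for the window arithmetic the while loop performs:

```python
def batch_windows(sql_start, sql_limit, step):
    """Yield the (local_SQL_start, local_SQL_limit) pairs the batching
    loop above iterates over, mirroring its update rules."""
    start = sql_start
    limit = min(sql_limit, step)
    while limit <= sql_limit:
        yield (start, limit)
        start += step
        limit += min(sql_limit, step)
```

With the defaults from server.cfg (start 0, limit 1000000, step 1000) this produces windows (0, 1000), (1000, 2000), and so on up to the configured limit.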
diff -r 000000000000 -r e34cf1b6fe09 pyspark/ilm/transform.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/ilm/transform.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,45 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+__author__="hargreavess"
+__date__ ="$29-Sep-2014 15:31:01$"
+
+import subprocess
+import time
+import random
+from os import walk
+
+def transform(audio_file, vamp_transform_list, output_dir):
+ print "transforming " + audio_file
+
+ command = ['sonic-annotator']
+ command.extend(['-T', vamp_transform_list,
+ audio_file, '-w', 'rdf', '--rdf-force', '--rdf-basedir', output_dir, '--rdf-many-files',
+ '-w', 'csv', '--csv-force', '--csv-basedir', output_dir])
+
+ print "calling subprocess.subprocess.Popen..."
+ start = time.time()
+ p = subprocess.Popen(
+ command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ p.wait()
+
+ end = time.time()
+ print "finished in " + str(end - start)
+
+ return True
+
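`transform()` above is essentially command-list assembly plus a blocking Popen. Factoring out the assembly makes the argument order easy to check; `build_command` is a hypothetical helper for illustration, not part of the patch:

```python
def build_command(audio_file, vamp_transform_list, output_dir):
    """Assemble the sonic-annotator argument list used by transform() above:
    a transform list via -T, then RDF and CSV writers into output_dir."""
    command = ['sonic-annotator']
    command.extend(['-T', vamp_transform_list,
                    audio_file,
                    '-w', 'rdf', '--rdf-force', '--rdf-basedir', output_dir, '--rdf-many-files',
                    '-w', 'csv', '--csv-force', '--csv-basedir', output_dir])
    return command
```

Passing the list form (rather than a shell string) to `subprocess.Popen` avoids shell quoting issues with audio paths containing spaces.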
diff -r 000000000000 -r e34cf1b6fe09 pyspark/ilm/transform_list.txt
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/ilm/transform_list.txt Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,7 @@
+/home/dmluser/vamp_plugins/beatroot_standard.n3
+/home/dmluser/vamp_plugins/tempotracker_tempo_standard.n3
+/home/dmluser/vamp_plugins/tempotracker_beats_standard.n3
+/home/dmluser/vamp_plugins/qm_vamp_key_standard.n3
+/home/dmluser/vamp_plugins/qm_vamp_key_standard_tonic.n3
+/home/dmluser/vamp_plugins/qm-segmentation_standard.n3
+/home/dmluser/vamp_plugins/qm-chromagram_standard.n3
diff -r 000000000000 -r e34cf1b6fe09 pyspark/n3Parser.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/n3Parser.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,74 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+from rdflib import Graph
+from rdflib.plugins.parsers.notation3 import BadSyntax
+import warnings
+import codecs
+import platform
+
+# Load and parse an n3 file
+def get_rdf_graph_from_n3(n3_file_uri):
+
+ graph = Graph()
+
+ try:
+ graph.parse(n3_file_uri, format="n3")
+ except UnicodeDecodeError:
+
+ n3_file_str = uri2path(n3_file_uri)
+ n3_file_iso = codecs.open(n3_file_str, 'r', "iso-8859-1")
+
+ # check if n3 is valid and parse
+ # repair if necessary
+ graph = parse_potentially_corrupt_n3(n3_file_iso.read())
+
+ except (AssertionError, BadSyntax):
+
+ n3_file_str = uri2path(n3_file_uri)
+ n3_file = open(n3_file_str, 'r')
+ graph = parse_potentially_corrupt_n3(n3_file.read())
+
+ return graph
+
+# can parse truncated n3
+def parse_potentially_corrupt_n3(content):
+ feature_graph = Graph()
+ # test if file is complete.
+ # if not, delete the last corrupted entry
+ if '.' not in content[-4:]:
+ warnings.warn("Incomplete rdf file, ignoring last entry")
+ # we find the last correct event
+ lastentry = content.rfind(':event')
+ feature_graph.parse(data=content[:lastentry], format="n3")
+ else:
+ feature_graph.parse(data=content, format="n3")
+
+ return feature_graph
+
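The repair step in `parse_potentially_corrupt_n3` is purely string-based, so it can be sketched without rdflib; `trim_truncated_n3` is a hypothetical name mirroring the logic above:

```python
import warnings

def trim_truncated_n3(content):
    """If the serialization does not end with a statement terminator,
    drop everything from the last ':event' entry onwards (as the
    parser above does before handing the text to rdflib)."""
    if '.' not in content[-4:]:
        warnings.warn("Incomplete rdf file, ignoring last entry")
        return content[:content.rfind(':event')]
    return content
```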
+# returns filepath from url
+def uri2path(n3_file_uri):
+
+ n3_file_uri_str = n3_file_uri.__str__()
+
+ # Assume that n3_file_uri_str starts with 'file://' - we need to remove that
+ if 'Win' in platform.system():
+ FILE_URI_START_INDEX = 8
+ else:
+ FILE_URI_START_INDEX = 7
+
+ n3_file_str = n3_file_uri_str[FILE_URI_START_INDEX:]
+ return n3_file_str
\ No newline at end of file
diff -r 000000000000 -r e34cf1b6fe09 pyspark/ontologies/dmlclaOntology.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/ontologies/dmlclaOntology.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,205 @@
+@prefix rdfs: .
+@prefix owl: .
+@prefix xsd: .
+@prefix rdf: .
+@prefix dml: .
+@prefix vamp: .
+@prefix dc: .
+@prefix qmplugbase: .
+@prefix silvetplugbase: .
+
+# This file defines an ontology, which also imports the vamp plugin ontology
+ a owl:Ontology;
+ owl:imports .
+
+# Our highest level class is a dml:Transform
+dml:Transform a owl:Class .
+dml:CollectionLevelAnalysis rdfs:subClassOf dml:Transform .
+
+dml:type rdfs:subPropertyOf rdf:type .
+
+vamp:Transform rdfs:subClassOf dml:Transform .
+
+dml:input a owl:ObjectProperty .
+dml:output a owl:ObjectProperty .
+
+# A CollectionLevelTonic is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the qm keydetector vamp plugin.
+dml:CollectionLevelTonic rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTonicInput ;
+ owl:minCardinality 1] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTonicOutput ;
+ owl:cardinality 1] .
+
+dml:collectionLevelTonicInput rdfs:subPropertyOf dml:input ;
+ rdfs:range qmplugbase:qm-keydetector_output_tonic .
+
+dml:collectionLevelTonicOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:TonicHistogram .
+
+# A (Tonic) Key Histogram is defined as:
+dml:TonicHistogram a vamp:DenseOutput ;
+ vamp:identifier "tonichistogram" ;
+ dc:title "Tonic Histogram" ;
+ dc:description "Histogram of estimated tonic (from C major = 1 to B major = 12)." ;
+ vamp:fixed_bin_count "true" ;
+ vamp:unit "" ;
+ vamp:bin_count 12 ;
+ vamp:bin_names ( "C" "C#" "D" "D#" "E" "F" "F#" "G" "G#" "A" "A#" "B");
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:sample_count ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin ;
+ owl:cardinality 12] ) .
+
+dml:bin a owl:ObjectProperty ;
+ rdfs:range dml:Bin .
+
+dml:Bin a owl:Class;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:bin_number ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin_value ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin_name ;
+ owl:minCardinality 0]
+ ) .
+
+dml:bin_number a owl:DatatypeProperty ;
+ rdfs:range xsd:integer .
+
+dml:bin_value a owl:DatatypeProperty .
+
+dml:bin_name a owl:DatatypeProperty ;
+ rdfs:range xsd:string .
+
+dml:sample_count a owl:DatatypeProperty ;
+ rdfs:range xsd:integer .
+
+# A Key Histogram is defined as:
+dml:KeyHistogram a vamp:DenseOutput ;
+ vamp:identifier "keyhistogram" ;
+ dc:title "Key Histogram" ;
+ dc:description "Histogram of estimated key (from C major = 1 to B major = 12 and C minor = 13 to B minor = 24)." ;
+ vamp:fixed_bin_count "true" ;
+ vamp:unit "" ;
+ vamp:bin_count 24 ;
+ vamp:bin_names ( "Cmaj" "C#maj" "Dmaj" "D#maj" "Emaj" "Fmaj" "F#maj" "Gmaj" "G#maj" "Amaj" "A#maj" "Bmaj" "Cmin" "C#min" "Dmin" "D#min" "Emin" "Fmin" "F#min" "Gmin" "G#min" "Amin" "A#min" "Bmin");
+ dml:sample_count [ a xsd:integer] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:bin ;
+ owl:cardinality 24] .
+
+
+
+# A CollectionLevelTuningFrequencyStatistics is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the silvet transcription plugin.
+dml:CollectionLevelTuningFrequencyStatistics rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTuningFrequencyStatisticsInput ;
+ owl:minCardinality 1] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTuningFrequencyStatisticsOutput ;
+ owl:cardinality 1] .
+
+dml:collectionLevelTuningFrequencyStatisticsInput rdfs:subPropertyOf dml:input ;
+ rdfs:range silvetplugbase:silvet_output_notes .
+
+dml:collectionLevelTuningFrequencyStatisticsOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:TuningFrequencyStatistics .
+
+# TuningFrequencyStatistics is defined as:
+dml:TuningFrequencyStatistics a vamp:DenseOutput ;
+ vamp:identifier "tuningfrequencystatistics" ;
+ dc:title "Tuning Frequency Statistics" ;
+ dc:description "Statistics of Estimated Tuning Frequency including mean, standard deviation and histogram" ;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:mean ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:std_dev ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin ;
+ owl:cardinality 100]) .
+
+dml:mean a owl:DatatypeProperty ;
+ rdfs:range xsd:float .
+
+dml:std_dev a owl:DatatypeProperty ;
+ rdfs:range xsd:float .
+
+# A CollectionLevelSemitone is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the http://vamp-plugins.org/rdf/plugins/silvet#silvet plugin.
+dml:CollectionLevelSemitone rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:intersectionOf (
+ [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelSemitoneInput ;
+ owl:minCardinality 1]
+ [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelSemitoneOutput ;
+ owl:cardinality 1] ) .
+
+dml:collectionLevelSemitoneInput rdfs:subPropertyOf dml:input ;
+ rdfs:range silvetplugbase:silvet_output_notes .
+
+dml:collectionLevelSemitoneOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:SemitoneHistogram .
+
+# A Semitone Histogram is defined as:
+dml:SemitoneHistogram a vamp:DenseOutput ;
+ vamp:identifier "semitonehistogram" ;
+ dc:title "Semitone Histogram" ;
+ dc:description "Histogram of estimated semitones" ;
+ vamp:fixed_bin_count "false" ;
+ vamp:unit "" ;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:sample_count ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin ;
+ owl:minCardinality 1] ) .
+
+# A CollectionLevelTonicNormSemitone is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be pairs of outputs from the http://vamp-plugins.org/rdf/plugins/silvet#silvet
+# and qmplugbase:qm-keydetector_output_tonic plugins.
+dml:CollectionLevelTonicNormSemitone rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:intersectionOf (
+ [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTonicNormSemitoneInput ;
+ owl:minCardinality 1]
+ [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTonicNormSemitoneOutput ;
+ owl:cardinality 1] ) .
+
+dml:collectionLevelTonicNormSemitoneInput rdfs:subPropertyOf dml:input ;
+ rdfs:range dml:InputSet .
+
+dml:InputSet a owl:Class;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:silvetInputSetItem ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:tonicInputSetItem ;
+ owl:cardinality 1] ) .
+
+dml:silvetInputSetItem rdfs:range silvetplugbase:silvet_output_notes .
+
+dml:tonicInputSetItem rdfs:range qmplugbase:qm-keydetector_output_tonic .
+
+dml:collectionLevelTonicNormSemitoneOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:SemitoneHistogram .
diff -r 000000000000 -r e34cf1b6fe09 pyspark/sonic-annotator-notimeside/sonic_annotator_vamp.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/sonic-annotator-notimeside/sonic_annotator_vamp.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,121 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+"""
+Created on Fri Oct 11 13:22:37 2013
+
+@author: thomas
+"""
+
+from __future__ import division
+
+import sys
+import shutil
+import os
+import errno
+import subprocess
+import time
+import random
+import hashlib
+
+# runs sonic-annotator in a separate process
+def vamp_host_process(argslist):
+ """Call sonic annotator and return 1 on success, 0 on failure."""
+
+ vamp_host = 'sonic-annotator'
+ command = [vamp_host]
+ command.extend(argslist)
+
+ # which sa version?
+ #p = subprocess.Popen([vamp_host, '-v'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ #print(p.stdout.read())
+
+
+ #stdout = subprocess.check_output(command, stderr=subprocess.STDOUT)
+ time.sleep(random.random()*1.0)
+ p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ text = p.stdout.read()
+ retcode = p.wait()
+ if retcode == 0:
+ print "Finished."
+ return 1
+ else:
+ print "Error " + text
+ return 0
+
+
+
+# processes the given file using a vamp plugin and sonic annotator
+# @param string wav_file file to be processed
+def transform(wav_file = 'sweep.mp3',
+ transform_file = 'bbc_speechmusic.n3'):
+
+ # get transform hash
+ BLOCKSIZE = 65536
+ hasher = hashlib.sha1()
+ with open(transform_file, 'rb') as afile:
+ buf = afile.read(BLOCKSIZE)
+ while len(buf) > 0:
+ hasher.update(buf)
+ buf = afile.read(BLOCKSIZE)
+ tpath = os.path.split(transform_file)
+
+ # prepare output directory:
+ # get audio file subpath
+ # create directory _Analysis for output if it doesn't exist
+ spath = os.path.split(wav_file)
+ hash = str(hasher.hexdigest())
+ featpath = spath[0] + '/_Analysis/' + tpath[1] + "_" + hash[:5]
+
+ #if not os.path.exists(featpath):
+ # os.makedirs(featpath)
+ print 'Creating directory ' + featpath
+ try:
+ os.makedirs(featpath)
+ except OSError as exception:
+ if exception.errno != errno.EEXIST:
+ raise
+
+ # copy transform file into directory
+ shutil.copy(transform_file, featpath + '/' + tpath[1][:-3] + '_' + hash[:5] + '.n3')
+
+ #./sonic-annotator -t silvet_settings.n3 input.wav -w csv
+ # prepare arguments
+ args = ['-t', transform_file, wav_file, '-w', 'csv', '-w', 'rdf',
+ '--rdf-basedir',featpath,'--csv-basedir',featpath, '--rdf-many-files', '--rdf-append']
+ #args = ['-t', transform_file, wav_file, '-w', 'csv', '--csv-force','--csv-basedir',featpath]
+
+
+ print "Analysing " + wav_file
+
+ result = vamp_host_process(args)
+ # execute vamp host
+ return [wav_file, result]
+
+
+# entry function only for testing
+if __name__ == "__main__":
+ if len(sys.argv) >= 2:
+ transform(sys.argv[1])
+ else:
+ transform()
+
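The output directory naming in `transform()` above embeds the first five hex characters of a SHA-1 over the transform file, read in BLOCKSIZE chunks. A sketch of that hashing step over an in-memory bytes blob; `short_transform_hash` is a hypothetical name:

```python
import hashlib

BLOCKSIZE = 65536

def short_transform_hash(data, length=5):
    """Hash transform-file bytes in BLOCKSIZE chunks, as transform() above
    does, and return the short hex prefix used in the directory name."""
    hasher = hashlib.sha1()
    for i in range(0, len(data), BLOCKSIZE):
        hasher.update(data[i:i + BLOCKSIZE])
    return hasher.hexdigest()[:length]
```

Chunked updates give the same digest as hashing the whole file at once, while keeping memory use bounded for large transform files.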
diff -r 000000000000 -r e34cf1b6fe09 pyspark/sonic-annotator-notimeside/test_sonic_annotator_notimeside.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/sonic-annotator-notimeside/test_sonic_annotator_notimeside.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,76 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/local/spark-1.0.0-bin-hadoop2/bin/spark-submit
+# -*- coding: utf-8 -*-
+__author__="wolffd"
+__date__ ="$11-Jul-2014 15:31:01$"
+
+from pyspark import SparkConf, SparkContext
+import sys
+import os
+from sonic_annotator_vamp import *
+
+# this is the main routine to be submitted as a spark job
+#
+#
+# Running python applications through ./bin/pyspark is deprecated as of Spark 1.0.
+# Use ./bin/spark-submit --py-files sonic_annotator_vamp.py
+# you can also provide a zip of all necessary python files
+#
+# @param string audiopath root of the folder structure to be traversed
+# @param string transform_file path to the .n3 turtle file describing the transform
+#def main(audiopath = '/home/wolffd/Documents/python/dml/TimeSide/tests/samples/',
+# transform_file = '/home/wolffd/Documents/python/dml/pyspark/sonic-annotator-notimeside/silvet_settings.n3',
+# masterip = '10.2.165.101'):
+def main(audiopath = '/CHARM-Collection',
+ transform_file = 'bbc_speechmusic.n3',
+ masterip = '0.0.0.0'):
+ print "PySpark Telemeta and Vamp Test"
+
+ # configure spark; caveat: the local profile uses just 1 core
+ conf = (SparkConf()
+ #.setMaster("local")
+ .setMaster("spark://" + masterip + ":7077")
+ .setAppName("Sonic Annotating")
+ .set("spark.executor.memory", "40g")
+ .set("spark.cores.max", "35"));
+ sc = SparkContext(conf = conf)
+
+ # here traverse the file structure
+ data = []
+ for (dirpath, dirnames, filenames) in os.walk(audiopath):
+ for file in filenames:
+ if file.endswith(".wav") or file.endswith(".mp3") or file.endswith(".flac"):
+ data.append(os.path.join(dirpath, file))
+ njobs = len(data)
+ donejobs = sc.accumulator(0)
+ print "Total: " + str(njobs) + " files"
+
+ # define distributed dataset
+ distData = sc.parallelize(data)
+
+ # define map
+ m1 = distData.map(lambda x: transform(wav_file=x,transform_file=transform_file))
+
+ # reduce (just do the maps ;) )
+ result = m1.collect()
+
+if __name__ == "__main__":
+ if len(sys.argv) >= 3:
+ main(sys.argv[1],sys.argv[2])
+ else:
+ main()
diff -r 000000000000 -r e34cf1b6fe09 pyspark/test_timeside_vamp_spark.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/test_timeside_vamp_spark.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,59 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/local/spark-1.0.0-bin-hadoop2/bin/spark-submit
+# -*- coding: utf-8 -*-
+__author__="wolffd"
+__date__ ="$11-Jul-2014 15:31:01$"
+
+from pyspark import SparkConf, SparkContext
+# @todo: timeside has to be packed for multi-pc usage
+from timeside_vamp import *
+from os import walk
+
+# Running python applications through ./bin/pyspark is deprecated as of Spark 1.0.
+# Use ./bin/spark-submit
+
+
+def main():
+ print "PySpark Telemeta and Vamp Test"
+ conf = (SparkConf()
+ .setMaster("local")
+ .setAppName("My app")
+ .set("spark.executor.memory", "1g"))
+ sc = SparkContext(conf = conf)
+
+ # here come the wav file names
+
+ mypath = '../../TimeSide/tests/samples/'
+ data = []
+ for (dirpath, dirnames, filenames) in walk(mypath):
+ for file in filenames:
+ if file.endswith(".wav"):
+ data.append(os.path.join(dirpath, file))
+
+ # define distributed dataset
+ distData = sc.parallelize(data)
+
+ # define map
+ m1 = distData.map(lambda x: transform(wav_file=x))
+
+ #process 2
+ m1.take(2)
+
+if __name__ == "__main__":
+ main()
+
\ No newline at end of file
diff -r 000000000000 -r e34cf1b6fe09 pyspark/test_timeside_vamp_spark_charm.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/test_timeside_vamp_spark_charm.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,101 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/local/spark-1.0.0-bin-hadoop2/bin/spark-submit
+# -*- coding: utf-8 -*-
+__author__="wolffd"
+__date__ ="$11-Jul-2014 15:31:01$"
+
+# How to run this?
+
+# to start hdfs: /usr/local/hadoop/sbin/start-dfs.sh
+
+# Running python applications through ./bin/pyspark is deprecated as of Spark 1.0.
+# Use ./bin/spark-submit
+# spark-submit test_timeside_vamp_spark_charm.py --py-files vamp_plugin_dml.py,timeside_vamp.py,decode_to_wav.py
+
+#import pydoop.hdfs as hdfs
+from pyspark import SparkConf, SparkContext
+# @todo: timeside has to be packed for multi-pc usage
+import os.path
+import os
+import sys
+from os import walk
+# NOTE: this is only for debugging purposes, we can
+# now use a regular timeside installation, e.g. installed by
+sys.path.append(os.getcwd() + '/../TimeSide/')
+
+# mappers
+from timeside_vamp import *
+from decode_to_wav import *
+
+def main():
+ print "PySpark Telemeta and Vamp Test on CHARM"
+
+ # configure the Spark Setup
+ conf = (SparkConf()
+ .setMaster("spark://0.0.0.0:7077")
+ #.setMaster("local")
+ .setAppName("CharmVamp")
+ .set("spark.executor.memory", "1g"))
+ sc = SparkContext(conf = conf)
+
+ # SMB Share
+ # mount.cifs //10.2.165.194/mirg /home/wolffd/wansteadshare -o username=dml,password=xxx,domain=ENTERPRISE")
+
+
+ # uses local paths
+ # get list of objects to process
+ mypath = '/samples/'
+ data = []
+ for (dirpath, dirnames, filenames) in walk(mypath):
+ for file in filenames:
+ if file.endswith(".wav") or file.endswith(".flac"):
+ data.append(os.path.join(dirpath, file))
+
+ data = data[0:2]
+ # HDFS
+ # note: for HDFS we need wrappers for VAMP and gstreamer :/
+ # copy to hdfs (put in different file before)
+ #hdfs.mkdir("test")
+ #hdfs.chmod("test","o+rw")
+ ##this copies the test wavs to hdfs
+ #hdfs.put("samples/","test/")
+ # get hdfs paths
+# data = []
+# filenames = hdfs.ls("hdfs://0.0.0.0:9000/user/hduser/test/samples")
+# print filenames
+# for file in filenames:
+# if file[-4:]== ".wav" or file[-4:]==".flac":
+# data.append(file)
+#
+ # define distributed dataset
+ # todo: can we do this with the wav data itself?
+ distData = sc.parallelize(data)
+
+ # define map that decodes to wav
+ m0 = distData.map(lambda x: decode_to_wav(source=x))
+
+ # define map that applies the vamp plugin
+ m1 = m0.map(lambda x: transform(wav_file=x)).collect()
+ print m1
+ return m1
+ #process 2
+ #m1.take(2)
+
+if __name__ == "__main__":
+ main()
+
diff -r 000000000000 -r e34cf1b6fe09 pyspark/test_transform_n3s/keyhisto_cla_trans.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/test_transform_n3s/keyhisto_cla_trans.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,60 @@
+@prefix rdfs: .
+@prefix owl: .
+@prefix xsd: .
+@prefix rdf: .
+@prefix dml: .
+@prefix vamp: .
+@prefix dc: .
+
+# This file defines an ontology, which also imports the vamp plugin ontology
+ a owl:Ontology;
+ owl:imports .
+
+# Our highest level class is a CollectionLevelAnalysis
+dml:Transform a owl:Class .
+dml:CollectionLevelAnalysis rdfs:subClassOf dml:Transform .
+
+dml:type rdfs:subPropertyOf rdf:type .
+
+vamp:Transform rdfs:subClassOf dml:Transform .
+
+# A CollectionLevelKeyTonic is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the qm keydetector vamp plugin.
+# Its output is a Key Tonic Histogram.
+dml:CollectionLevelKeyTonic rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ dml:input [ a ] ;
+ dml:output [ a dml:KeyTonicHistogram] .
+
+dml:CollectionLevelKey rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ dml:input [ a ] ;
+ dml:output [ a dml:KeyHistogram] .
+
+# specify domain and range of dml:input here:
+#dml:input ;
+
+# An Average Key is defined as:
+dml:AverageKeyTonic a vamp:SparseOutput ;
+ dml:identifier "average tonic key" ;
+ dc:title "Average Key Tonic" ;
+ dc:description "Average estimated tonic key (from C major = 1 to B major = 12)" ;
+ vamp:unit "" ;
+ dml:sample_count [ a xsd:integer] ;
+ dml:result [ a owl:DatatypeProperty] .
+
+# A (Tonic) Key Histogram is defined as:
+dml:KeyTonicHistogram a vamp:DenseOutput ;
+ vamp:identifier "keytonichistogram" ;
+ dc:title "Key Tonic Histogram" ;
+ dc:description "Histogram of estimated tonic key (from C major = 1 to B major = 12)." ;
+ vamp:fixed_bin_count "true" ;
+ vamp:unit "" ;
+ vamp:bin_count 12 ;
+ vamp:bin_names ( "C" "C#" "D" "D#" "E" "F" "F#" "G" "G#" "A" "A#" "B");
+ dml:sample_count [ a xsd:integer] ;
+ dml:result [ a owl:DatatypeProperty] .
+
+# Some example data
+_:cl_key_tonic dml:type dml:CollectionLevelKeyTonic ;
+ dml:input ;
+ dml:input .
diff -r 000000000000 -r e34cf1b6fe09 pyspark/test_transform_n3s/tuningstats_cla_trans.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/test_transform_n3s/tuningstats_cla_trans.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,133 @@
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix owl: <http://www.w3.org/2002/07/owl#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix dml: <http://dml.org/dml/cla#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+
+# This file defines an ontology, which also imports the vamp plugin ontology
+ a owl:Ontology;
+ owl:imports .
+
+# Our highest level class is a dml:Transform
+dml:Transform a owl:Class .
+dml:CollectionLevelAnalysis rdfs:subClassOf dml:Transform .
+
+dml:type rdfs:subPropertyOf rdf:type .
+
+vamp:Transform rdfs:subClassOf dml:Transform .
+
+dml:input a owl:ObjectProperty .
+dml:output a owl:ObjectProperty .
+
+# A CollectionLevelKeyTonic is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the qm keydetector vamp plugin.
+dml:CollectionLevelKeyTonic rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelKeyTonicInput ;
+ owl:minCardinality 1] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelKeyTonicOutput ;
+ owl:cardinality 1] .
+
+dml:collectionLevelKeyTonicInput rdfs:subPropertyOf dml:input ;
+ rdfs:range .
+
+dml:collectionLevelKeyTonicOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:KeyTonicHistogram .
+
+# A (Tonic) Key Histogram is defined as:
+dml:KeyTonicHistogram a vamp:DenseOutput ;
+ vamp:identifier "keytonichistogram" ;
+ dc:title "Key Tonic Histogram" ;
+ dc:description "Histogram of estimated tonic key (from C major = 1 to B major = 12)." ;
+ vamp:fixed_bin_count "true" ;
+ vamp:unit "" ;
+ vamp:bin_count 12 ;
+ vamp:bin_names ( "C" "C#" "D" "D#" "E" "F" "F#" "G" "G#" "A" "A#" "B");
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:sample_count ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin ;
+ owl:cardinality 12] ) .
+
+dml:bin a owl:ObjectProperty ;
+ rdfs:range dml:Bin .
+
+dml:Bin a owl:Class;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:bin_number ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin_value ;
+ owl:cardinality 1] ) .
+
+dml:bin_number a owl:DatatypeProperty ;
+ rdfs:range xsd:integer .
+
+dml:bin_value a owl:DatatypeProperty .
+
+dml:sample_count a owl:DatatypeProperty ;
+ rdfs:range xsd:integer .
+
+# A Key Histogram is defined as:
+dml:KeyHistogram a vamp:DenseOutput ;
+ vamp:identifier "keyhistogram" ;
+ dc:title "Key Histogram" ;
+ dc:description "Histogram of estimated key (from C major = 1 to B major = 12 and C minor = 13 to B minor = 24)." ;
+ vamp:fixed_bin_count "true" ;
+ vamp:unit "" ;
+ vamp:bin_count 24 ;
+ vamp:bin_names ( "Cmaj" "C#maj" "Dmaj" "D#maj" "Emaj" "Fmaj" "F#maj" "Gmaj" "G#maj" "Amaj" "A#maj" "Bmaj" "Cmin" "C#min" "Dmin" "D#min" "Emin" "Fmin" "F#min" "Gmin" "G#min" "Amin" "A#min" "Bmin");
+ dml:sample_count [ a xsd:integer] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:bin ;
+ owl:cardinality 24] .
+
+
+
+# A CollectionLevelTuningFrequencyStatistics is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the silvet transcription plugin.
+dml:CollectionLevelTuningFrequencyStatistics rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTuningFrequencyStatisticsInput ;
+ owl:minCardinality 1] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTuningFrequencyStatisticsOutput ;
+ owl:cardinality 1] .
+
+dml:collectionLevelTuningFrequencyStatisticsInput rdfs:subPropertyOf dml:input ;
+ rdfs:range .
+
+dml:collectionLevelTuningFrequencyStatisticsOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:TuningFrequencyStatistics .
+
+# TuningFrequencyStatistics is defined as:
+dml:TuningFrequencyStatistics a vamp:DenseOutput ;
+ vamp:identifier "tuningfrequencystatistics" ;
+ dc:title "Tuning Frequency Statistics" ;
+ dc:description "Statistics of Estimated Tuning Frequency" ;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:mean ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:std_dev ;
+ owl:cardinality 1] ) .
+
+dml:mean a owl:DatatypeProperty ;
+ rdfs:range xsd:float .
+
+dml:std_dev a owl:DatatypeProperty ;
+ rdfs:range xsd:float .
+
+# Some example data
+_:cl_tuning_statistics dml:type dml:CollectionLevelTuningFrequencyStatistics ;
+ dml:input ;
+ dml:input .
diff -r 000000000000 -r e34cf1b6fe09 pyspark/test_transform_n3s/tuningstats_cla_trans_win.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/test_transform_n3s/tuningstats_cla_trans_win.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,139 @@
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix owl: <http://www.w3.org/2002/07/owl#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix dml: <http://dml.org/dml/cla#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+
+# This file defines an ontology, which also imports the vamp plugin ontology
+ a owl:Ontology;
+ owl:imports .
+
+# Our highest level class is a dml:Transform
+dml:Transform a owl:Class .
+dml:CollectionLevelAnalysis rdfs:subClassOf dml:Transform .
+
+dml:type rdfs:subPropertyOf rdf:type .
+
+vamp:Transform rdfs:subClassOf dml:Transform .
+
+dml:input a owl:ObjectProperty .
+dml:output a owl:ObjectProperty .
+
+# A CollectionLevelKeyTonic is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the qm keydetector vamp plugin.
+dml:CollectionLevelKeyTonic rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelKeyTonicInput ;
+ owl:minCardinality 1] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelKeyTonicOutput ;
+ owl:cardinality 1] .
+
+dml:collectionLevelKeyTonicInput rdfs:subPropertyOf dml:input ;
+ rdfs:range .
+
+dml:collectionLevelKeyTonicOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:KeyTonicHistogram .
+
+# A (Tonic) Key Histogram is defined as:
+dml:KeyTonicHistogram a vamp:DenseOutput ;
+ vamp:identifier "keytonichistogram" ;
+ dc:title "Key Tonic Histogram" ;
+ dc:description "Histogram of estimated tonic key (from C major = 1 to B major = 12)." ;
+ vamp:fixed_bin_count "true" ;
+ vamp:unit "" ;
+ vamp:bin_count 12 ;
+ vamp:bin_names ( "C" "C#" "D" "D#" "E" "F" "F#" "G" "G#" "A" "A#" "B");
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:sample_count ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin ;
+ owl:cardinality 12] ) .
+
+dml:bin a owl:ObjectProperty ;
+ rdfs:range dml:Bin .
+
+dml:Bin a owl:Class;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:bin_number ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin_value ;
+ owl:cardinality 1] ) .
+
+dml:bin_number a owl:DatatypeProperty ;
+ rdfs:range xsd:integer .
+
+dml:bin_value a owl:DatatypeProperty .
+
+dml:sample_count a owl:DatatypeProperty ;
+ rdfs:range xsd:integer .
+
+# A Key Histogram is defined as:
+dml:KeyHistogram a vamp:DenseOutput ;
+ vamp:identifier "keyhistogram" ;
+ dc:title "Key Histogram" ;
+ dc:description "Histogram of estimated key (from C major = 1 to B major = 12 and C minor = 13 to B minor = 24)." ;
+ vamp:fixed_bin_count "true" ;
+ vamp:unit "" ;
+ vamp:bin_count 24 ;
+ vamp:bin_names ( "Cmaj" "C#maj" "Dmaj" "D#maj" "Emaj" "Fmaj" "F#maj" "Gmaj" "G#maj" "Amaj" "A#maj" "Bmaj" "Cmin" "C#min" "Dmin" "D#min" "Emin" "Fmin" "F#min" "Gmin" "G#min" "Amin" "A#min" "Bmin");
+ dml:sample_count [ a xsd:integer] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:bin ;
+ owl:cardinality 24] .
+
+
+
+# A CollectionLevelTuningFrequencyStatistics is a CollectionLevelAnalysis,
+# it requires at least one input, and these inputs
+# will all be outputs from the silvet transcription plugin.
+dml:CollectionLevelTuningFrequencyStatistics rdfs:subClassOf dml:CollectionLevelAnalysis ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTuningFrequencyStatisticsInput ;
+ owl:minCardinality 1] ;
+ owl:equivalentClass [ a owl:Restriction ;
+ owl:onProperty dml:collectionLevelTuningFrequencyStatisticsOutput ;
+ owl:cardinality 1] .
+
+dml:collectionLevelTuningFrequencyStatisticsInput rdfs:subPropertyOf dml:input ;
+ rdfs:range .
+
+dml:collectionLevelTuningFrequencyStatisticsOutput rdfs:subPropertyOf dml:output ;
+ rdfs:range dml:TuningFrequencyStatistics .
+
+# TuningFrequencyStatistics is defined as:
+dml:TuningFrequencyStatistics a vamp:DenseOutput ;
+ vamp:identifier "tuningfrequencystatistics" ;
+ dc:title "Tuning Frequency Statistics" ;
+ dc:description "Statistics of Estimated Tuning Frequency including mean, standard deviation and histogram" ;
+ owl:intersectionOf (
+ [ a owl:Restriction;
+ owl:onProperty dml:mean ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:std_dev ;
+ owl:cardinality 1]
+ [ a owl:Restriction;
+ owl:onProperty dml:bin ;
+ owl:cardinality 100]) .
+
+dml:mean a owl:DatatypeProperty ;
+ rdfs:range xsd:float .
+
+dml:std_dev a owl:DatatypeProperty ;
+ rdfs:range xsd:float .
+
+# Some example data
+_:cl_tuning_statistics dml:type dml:CollectionLevelTuningFrequencyStatistics ;
+# dml:input ;
+# dml:input ;
+ dml:input ;
+# dml:input ;
+ dml:input .
\ No newline at end of file
diff -r 000000000000 -r e34cf1b6fe09 pyspark/timeside_vamp.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/timeside_vamp.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,91 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+"""
+Created on Fri Oct 11 13:22:37 2013
+
+@author: thomas
+"""
+
+from __future__ import division
+
+import matplotlib.pyplot as plt
+import numpy as np
+import sys
+import os
+
+# NOTE: this is only for debugging purposes, we can
+# now use a regular timeside and sonic annotator installation,
+#sudo ln -s ../../sonic_annotator/sonic-annotator-0.7-linux-amd64/sonic-annotator /usr/sbin/
+#sys.path.append(os.getcwd() + '../../sonic_annotator/sonic-annotator-0.7-linux-amd64/')
+
+import timeside
+from timeside.analyzer.core import AnalyzerResult, AnalyzerResultContainer
+from timeside import __version__
+from vamp_plugin_dml import *
+
+def transform(wav_file = 'sweep.wav'):
+
+
+ # normal
+ d = timeside.decoder.file.FileDecoder(wav_file)
+
+ # Get available Vamp plugins list
+ # VampSimpleHostDML comes from the vamp_plugin_dml wildcard import above
+ plugins_list = VampSimpleHostDML.get_plugins_list()
+
+ # Display available plugins
+ print 'index \t soname \t \t identifier \t output '
+ print '------ \t \t ---------- \t ------ '
+ for index, line in zip(xrange(len(plugins_list)),plugins_list):
+ print '%d : %s \t %s \t %s' % (index,line[0],line[1],line[2])
+
+ # Choose the first plugin in the list
+ my_plugin = plugins_list[0]
+ print my_plugin
+
+ #
+ # Vamp plugin Analyzer
+ #vamp = timeside.analyzer.vamp_plugin.VampSimpleHostDML([my_plugin])
+ vamp = VampSimpleHostDML([my_plugin])
+ #vamp = timeside.analyzer.VampSimpleHostDML()
+
+ myPipe = (d | vamp ).run()
+
+ # Get the vamp plugin result and plot it
+ for key in vamp.results.keys():
+ print vamp.results[key].data
+ res_vamp = vamp.results[key]
+
+
+ # test storage as HDF5
+ #vamp.results.to_hdf5(wav_file + '.h5')
+ #res_hdf5 = vamp.results.from_hdf5(wav_file + '.h5')
+ #print '%15s' % 'from hdf5:',
+ #print res_hdf5
+
+ return res_vamp
+
+ # res_vamp = vamp.results['vamp_simple_host.percussiononsets.detectionfunction']
+
+if __name__ == "__main__":
+ if len(sys.argv) >= 2:
+ transform(sys.argv[1])
+ else:
+ transform()
+
\ No newline at end of file
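The `(d | vamp).run()` call above relies on TimeSide's overloaded `|` operator to chain a decoder into an analyzer. A toy reimplementation of that composition pattern, purely illustrative and not TimeSide's actual API:

```python
class Processor:
    # toy processor supporting `a | b` chaining, loosely modelled on
    # the pipe used as `(d | vamp).run()` above; names are hypothetical
    def __init__(self, fn):
        self.fn = fn
        self.next = None

    def __or__(self, other):
        # link processors left to right and return a runnable pipe
        self.next = other
        return Pipe(self)

class Pipe:
    def __init__(self, head):
        self.head = head

    def run(self, data):
        # push data through each linked processor in order
        node, out = self.head, data
        while node is not None:
            out = node.fn(out)
            node = node.next
        return out

decoder = Processor(lambda path: path + ":decoded")
analyzer = Processor(lambda wav: wav + ":analyzed")
print((decoder | analyzer).run("sweep.wav"))  # sweep.wav:decoded:analyzed
```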
diff -r 000000000000 -r e34cf1b6fe09 pyspark/transforms/semitoneHistogram.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/transforms/semitoneHistogram.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,148 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# -*- coding: utf-8 -*-
+__author__="hargreavess"
+
+from rdflib import Namespace, BNode, RDF, Literal
+from n3Parser import get_rdf_graph_from_n3
+from csvParser import get_dict_from_csv
+
+dml_ns = Namespace("http://dml.org/dml/cla#")
+semitone_labels = ("C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B")
+semitone_label_codes = dict()
+for octave in range(0, 11):
+
+ semitone_idx = 1
+
+ for semitone_label in semitone_labels:
+
+ semitone_label_with_octave = semitone_label + str(octave)
+ semitone_label_codes[semitone_label_with_octave] = semitone_idx
+ semitone_idx += 1
+
+
+# normalisation per clip ?
+perfilenorm = 1
+
+# Add triples representing a 'pitch histogram' result to
+# an RDF graph
+def add_semitone_histogram_to_graph(semitone_histogram, output_rdf_graph, transform, sample_count, input_f_files):
+
+ output_bnode = BNode()
+ output_rdf_graph.add((transform, dml_ns.output, output_bnode))
+ for input_f_file in input_f_files:
+ output_rdf_graph.add((transform, dml_ns.input, input_f_file))
+ output_rdf_graph.add((output_bnode, RDF.type, dml_ns.SemitoneHistogram))
+ output_rdf_graph.add((output_bnode, dml_ns.sample_count, Literal(sample_count)))
+
+ for semitone in semitone_histogram:
+
+ bin_bnode = BNode()
+ output_rdf_graph.add((output_bnode, dml_ns.bin, bin_bnode))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_number, Literal(semitone)))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_value, Literal(semitone_histogram.get(semitone))))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_name, Literal(semitone_labels[semitone - 1])))
+
+ return output_rdf_graph
+
+# Parse an input_f_file n3 file, and generate
+# a semitone histogram
+def find_semitone_histogram(input_f_file, perfilenorm):
+
+ piece_semitone_hist = dict()
+
+ for x in range(1, 13):
+
+ piece_semitone_hist[x] = 0
+
+ piece_duration = 0
+
+ if input_f_file.endswith('.csv'):
+
+ csv_dict = get_dict_from_csv(input_f_file, columtype = ['time','duration','pitch','velocity','label'])
+
+ for row in csv_dict:
+
+ duration = float(row['duration'])
+ piece_semitone_hist[semitone_label_codes[row['label']]] += duration
+ piece_duration += duration
+
+ else:
+
+ f_file_graph = get_rdf_graph_from_n3(input_f_file)
+
+ qres = f_file_graph.query(
+ """prefix event: <http://purl.org/NET/c4dm/event.owl#>
+ prefix tl: <http://purl.org/NET/c4dm/timeline.owl#>
+ prefix af: <http://purl.org/ontology/af/>
+ prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+ SELECT ?event ?pitch ?duration
+ WHERE {
+ ?event a af:Note .
+ ?event event:time ?event_time .
+ ?event_time tl:duration ?duration .
+ ?event rdfs:label ?pitch .
+ }""")
+
+ for row in qres:
+
+ # parse xsd:duration type
+ tl_duration_str_len = len(row.duration)
+ tl_duration = float(row.duration[2:tl_duration_str_len-1])
+
+ piece_semitone_hist[semitone_label_codes[row.pitch.__str__()]] += tl_duration
+ piece_duration += tl_duration
+
+ # normalise if necessary
+ if perfilenorm:
+
+ for x in range(1, 13):
+
+ piece_semitone_hist[x] /= piece_duration
+
+ return piece_semitone_hist
+
+# Parse the input_f_files n3 files, and generate
+# a collection-level semitone histogram
+def find_cla_semitone_histogram(input_f_files):
+
+ num_f_files = len(input_f_files)
+ semitone_hist = dict()
+
+ for x in range(1, 13):
+
+ semitone_hist[x] = 0
+
+ for input_f_file in input_f_files:
+
+ piece_semitone_hist = find_semitone_histogram(input_f_file, perfilenorm)
+
+ for x in range(1, 13):
+
+ semitone_hist[x] += piece_semitone_hist[x]
+
+ # normalise the collection histogram by duration
+ hist_total = 0
+
+ for semitone_bin in semitone_hist:
+
+ hist_total += semitone_hist[semitone_bin]
+
+ for semitone_bin in semitone_hist:
+
+ semitone_hist[semitone_bin] /= hist_total
+
+ return (semitone_hist, num_f_files)
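The collection-level step in `find_cla_semitone_histogram` sums per-file histograms and renormalises the total so the bins sum to 1. That reduction in isolation, assuming the same 1-based 12-bin dict layout used above:

```python
def aggregate_histograms(per_file_hists):
    # sum 1-based 12-bin per-file histograms, then normalise the
    # collection histogram so its bins sum to 1
    total = {x: 0.0 for x in range(1, 13)}
    for hist in per_file_hists:
        for x in range(1, 13):
            total[x] += hist[x]
    grand = sum(total.values())
    return {x: v / grand for x, v in total.items()}

h1 = {x: 0.0 for x in range(1, 13)}
h1[1] = 1.0        # a piece entirely on C
h2 = {x: 0.0 for x in range(1, 13)}
h2[1] = 0.5        # half C ...
h2[8] = 0.5        # ... half G
print(aggregate_histograms([h1, h2])[1])  # 0.75
```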
diff -r 000000000000 -r e34cf1b6fe09 pyspark/transforms/tonicHistogram.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/transforms/tonicHistogram.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,156 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# -*- coding: utf-8 -*-
+__author__="hargreavess"
+
+from rdflib import Graph, Namespace, BNode, RDF, Literal
+from n3Parser import get_rdf_graph_from_n3
+from csvParser import get_dict_from_csv, get_array_from_csv
+
+dml_ns = Namespace("http://dml.org/dml/cla#")
+
+# Add triples representing a 'tonic histogram' result to
+# an RDF graph
+def add_tonic_histogram_to_graph(tonic_histogram, output_rdf_graph, transform, sample_count, input_f_files):
+
+ output_bnode = BNode()
+ output_rdf_graph.add((transform, dml_ns.output, output_bnode))
+ for input_f_file in input_f_files:
+ output_rdf_graph.add((transform, dml_ns.input, input_f_file))
+ output_rdf_graph.add((output_bnode, RDF.type, dml_ns.TonicHistogram))
+ output_rdf_graph.add((output_bnode, dml_ns.sample_count, Literal(sample_count)))
+
+ for tonic in tonic_histogram:
+
+ bin_bnode = BNode()
+ output_rdf_graph.add((output_bnode, dml_ns.bin, bin_bnode))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_number, Literal(tonic)))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_value, Literal(tonic_histogram.get(tonic))))
+
+ return output_rdf_graph
+
+# Parse the input_f_files n3 files, and generate
+# a tonic histogram
+def find_cla_tonic_histogram(input_f_files):
+
+ num_f_files = len(input_f_files)
+ tonic_hist = dict()
+
+ for x in range(1,13):
+
+ tonic_hist[x] = 0
+
+ for input_f_file in input_f_files:
+
+# tonic = find_last_key_in_piece(input_f_file)
+ tonic = find_most_common_key_in_piece(input_f_file)
+ tonic_hist[tonic] = tonic_hist.get(tonic) + 1
+
+ return (tonic_hist, num_f_files)
+
+def find_most_common_key_in_piece(input_f_file):
+
+ tonic_hist = find_tonic_histogram(input_f_file)
+ duration_of_tonic = max(tonic_hist.values())
+ result = -1
+
+ for tonic in tonic_hist:
+
+ if tonic_hist[tonic] == duration_of_tonic:
+ result = tonic
+
+ return result
+
+# Parse the input_f_files n3 file, and generate
+# a tonic histogram
+def find_tonic_histogram(input_f_file):
+
+ tonic_hist = dict()
+
+ for x in range(1,13):
+
+ tonic_hist[x] = 0
+
+ if input_f_file.endswith('.csv'):
+
+ # ['time','keynr','label']
+ csv_array = get_array_from_csv(input_f_file)
+
+ for idx in range(1, len(csv_array)):
+
+ tonic_duration = csv_array[idx][0] - csv_array[idx - 1][0]
+ tonic = int(csv_array[idx - 1][1])
+ tonic_hist[tonic] = tonic_hist.get(tonic) + tonic_duration
+
+ else:
+
+ # TODO - n3 version of tonic histogram
+ # for now use last key in piece
+ tonic = find_last_key_in_piece(input_f_file)
+ tonic_hist[tonic] = tonic_hist.get(tonic) + 1
+
+ return (tonic_hist)
+
+# Determine the last (temporally) key in the
+# input_f_file n3 file
+def find_last_key_in_piece(input_f_file):
+
+ max_time = 0
+ last_key = 0
+
+ if input_f_file.endswith('.csv'):
+
+ csv_dict = get_dict_from_csv(input_f_file, columtype = ['time','keynr','label'])
+
+ for row in csv_dict:
+
+ tl_time = float(row['time'])
+
+ if tl_time > max_time:
+
+ max_time = tl_time
+ last_key = row['keynr']
+
+
+ else:
+
+ key_feature_graph = get_rdf_graph_from_n3(input_f_file)
+
+ qres = key_feature_graph.query(
+ """prefix event: <http://purl.org/NET/c4dm/event.owl#>
+ prefix tl: <http://purl.org/NET/c4dm/timeline.owl#>
+ prefix af: <http://purl.org/ontology/af/>
+ SELECT ?event ?key ?tl_time
+ WHERE {
+ ?event event:time ?event_time .
+ ?event_time tl:at ?tl_time .
+ ?event af:feature ?key .
+ }""")
+
+ for row in qres:
+
+ tl_time_str_len = len(row.tl_time)
+ tl_time = float(row.tl_time[2:tl_time_str_len-1])
+
+ if tl_time > max_time:
+
+ max_time = tl_time
+ last_key = row.key
+
+
+ return int(last_key)
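`find_most_common_key_in_piece` returns the bin holding the largest accumulated duration, and its loop overwrites the result on every match, so ties resolve to the last matching key in iteration order. A compact equivalent sketch (not the file's exact code):

```python
def most_common_key(tonic_hist):
    # pick the key (1..12) with the largest accumulated duration;
    # on ties, keep the highest key number, mirroring the overwrite
    # behaviour of the loop in find_most_common_key_in_piece
    best = max(tonic_hist.values())
    return max(k for k, v in tonic_hist.items() if v == best)

hist = {x: 0.0 for x in range(1, 13)}
hist[3] = 2.5      # bin 3 (D) held longest
hist[10] = 1.0
print(most_common_key(hist))  # 3
```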
diff -r 000000000000 -r e34cf1b6fe09 pyspark/transforms/tonicNormSemitoneHistogram.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/transforms/tonicNormSemitoneHistogram.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,126 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# -*- coding: utf-8 -*-
+__author__="hargreavess"
+
+import rdflib
+from rdflib.plugins.sparql import prepareQuery # ensures rdflib.plugins.sparql is loaded for the calls below
+from rdflib import Namespace, BNode, RDF, Literal
+from n3Parser import get_rdf_graph_from_n3
+from semitoneHistogram import find_semitone_histogram, semitone_labels
+from tonicHistogram import find_last_key_in_piece, find_most_common_key_in_piece
+
+dml_ns = Namespace("http://dml.org/dml/cla#")
+
+# normalisation per clip?
+perfilenorm = 1
+
+# Add triples representing a "pitch histogram" result to
+# an RDF graph
+def add_tonic_norm_semitone_histogram_to_graph(semitone_histogram, output_rdf_graph, transform, sample_count, input_f_files, input_rdf_graph):
+
+ query = rdflib.plugins.sparql.prepareQuery(
+ """SELECT ?silvet_input ?tonic_input
+ WHERE {
+ ?tonicNormSemitoneInput dml:silvetInputSetItem ?silvet_input .
+ ?tonicNormSemitoneInput dml:tonicInputSetItem ?tonic_input .
+ }""", initNs = { "dml": dml_ns })
+
+ output_bnode = BNode()
+ output_rdf_graph.add((transform, dml_ns.output, output_bnode))
+
+ for transform_input in input_f_files:
+
+ output_rdf_graph.add((transform, dml_ns.input, transform_input))
+ qres = input_rdf_graph.query(query, initBindings={'tonicNormSemitoneInput': transform_input})
+
+ for row in qres:
+
+ output_rdf_graph.add((transform_input, dml_ns.silvetInputSetItem, row.silvet_input))
+ output_rdf_graph.add((transform_input, dml_ns.tonicInputSetItem, row.tonic_input))
+
+ output_rdf_graph.add((output_bnode, RDF.type, dml_ns.SemitoneHistogram))
+ output_rdf_graph.add((output_bnode, dml_ns.sample_count, Literal(sample_count)))
+
+ for semitone in semitone_histogram:
+
+ bin_bnode = BNode()
+ output_rdf_graph.add((output_bnode, dml_ns.bin, bin_bnode))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_number, Literal(semitone)))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_value, Literal(semitone_histogram.get(semitone))))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_name, Literal(semitone_labels[semitone - 1])))
+
+ return output_rdf_graph
+
+# Parse the transform_inputs (sets of n3 files), and generate
+# a tonic-normalised semitone histogram
+def find_cla_tonic_norm_semitone_histogram(transform_inputs, input_rdf_graph):
+
+ sample_count = len(transform_inputs)
+ semitone_hist = dict()
+
+ for x in range(1, 13):
+
+ semitone_hist[x] = 0
+
+ query = rdflib.plugins.sparql.prepareQuery(
+ """SELECT ?silvet_input ?tonic_input
+ WHERE {
+ ?tonicNormSemitoneInput dml:silvetInputSetItem ?silvet_input .
+ ?tonicNormSemitoneInput dml:tonicInputSetItem ?tonic_input .
+ }""", initNs = { "dml": dml_ns })
+
+ for transform_input in transform_inputs:
+
+ qres = input_rdf_graph.query(query, initBindings={'tonicNormSemitoneInput': transform_input})
+
+ piece_semitone_hist = []
+
+ for row in qres:
+
+ piece_semitone_hist = find_semitone_histogram(row.silvet_input, perfilenorm)
+# piece_tonic = find_last_key_in_piece(row.tonic_input)
+ piece_tonic = find_most_common_key_in_piece(row.tonic_input)
+ piece_semitone_hist = normalise_semitone_hist_by_tonic(piece_semitone_hist, piece_tonic)
+
+ for x in range(1, 13):
+
+ semitone_hist[x] += piece_semitone_hist[x]
+
+ # normalise the collection histogram by duration
+ hist_total = 0
+
+ for semitone_bin in semitone_hist:
+
+ hist_total += semitone_hist[semitone_bin]
+
+ for semitone_bin in semitone_hist:
+
+ semitone_hist[semitone_bin] /= hist_total
+
+ return (semitone_hist, sample_count)
+
+def normalise_semitone_hist_by_tonic(piece_semitone_hist, piece_tonic):
+
+ tonic_norm_semitone_hist = dict()
+
+ for semitone_bin in piece_semitone_hist:
+
+ shifted_bin = ((semitone_bin - piece_tonic) % 12) + 1
+ tonic_norm_semitone_hist[shifted_bin] = piece_semitone_hist[semitone_bin]
+
+ return tonic_norm_semitone_hist
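`normalise_semitone_hist_by_tonic` rotates the 12 bins so the tonic always lands in bin 1. The `((bin - tonic) % 12) + 1` arithmetic is worth checking in isolation:

```python
def shift_to_tonic(hist, tonic):
    # rotate a 1-based 12-bin histogram so `tonic` maps to bin 1,
    # the semitone above it to bin 2, and so on (mod 12)
    return {((b - tonic) % 12) + 1: v for b, v in hist.items()}

hist = {x: 0.0 for x in range(1, 13)}
hist[3] = 1.0                      # all mass on bin 3
shifted = shift_to_tonic(hist, 3)  # tonic is bin 3
print(shifted[1])                  # 1.0 -- the tonic mass moved to bin 1
```

With `tonic = 1` the mapping is the identity, so histograms already in C are unchanged.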
diff -r 000000000000 -r e34cf1b6fe09 pyspark/transforms/tuningFrequencyStatistics.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/transforms/tuningFrequencyStatistics.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,280 @@
+# Part of DML (Digital Music Laboratory)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# -*- coding: utf-8 -*-
+__author__="Daniel Wolff, hargreaves"
+
+# this script derives standard statistics for tuning frequency,
+# in particular:
+# average
+# standard deviation
+
+from rdflib import Graph, Namespace, BNode, RDF, Literal
+import codecs
+import warnings
+import numpy
+import csv
+from n3Parser import get_rdf_graph_from_n3, uri2path
+# from csvParser import get_dict_from_csv, get_array_from_csv
+
+# statistics per clip ?
+perfilestats = 1
+
+# dml namespace
+dml_ns = Namespace("http://dml.org/dml/cla#")
+
+# Add triples representing a 'key histogram' result to
+# an RDF graph
+def add_tf_statistics_to_graph(statistics, output_rdf_graph, transform, sample_count, input_f_files):
+
+ # add base
+ output_bnode = BNode()
+ output_rdf_graph.add((transform, dml_ns.output, output_bnode))
+ for input_f_file in input_f_files:
+ output_rdf_graph.add((transform, dml_ns.input, input_f_file))
+ output_rdf_graph.add((output_bnode, RDF.type, dml_ns.TuningFrequencyStatistics))
+ output_rdf_graph.add((output_bnode, dml_ns.sample_count, Literal(sample_count)))
+
+ # add mean and std
+ output_rdf_graph.add((output_bnode, dml_ns.mean, Literal(statistics["mean"])))
+ output_rdf_graph.add((output_bnode, dml_ns.std_dev, Literal(statistics["std-dev"])))
+
+ # add histogram
+ for i in range(0,len(statistics["histogram"]["count"])):
+
+ bin_bnode = BNode()
+ output_rdf_graph.add((output_bnode, dml_ns.bin, bin_bnode))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_number, Literal(i+1)))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_value, Literal(statistics["histogram"]["count"][i])))
+ output_rdf_graph.add((bin_bnode, dml_ns.bin_name, Literal(statistics["histogram"]["index"][i])))
+
+ return output_rdf_graph
+
+# Parse the input_f_files n3 files, and generate
+# a key histogram
+def find_cla_tf_statistics(input_f_files):
+
+
+ sample_count = len(input_f_files)
+
+ all_data = []
+ perfile_freq = []
+ perfile_hist = []
+ hist_index =[]
+ for input_f_file in input_f_files:
+
+ # get all data from feature file
+ data = file_to_table(input_f_file)
+
+ # filter those rows which have an A
+ # returns duration, frequency
+ data = filter_norm_A(data)
+
+ if perfilestats:
+ # get frequency and duration columns
+ freq = string2numpy(data,2)
+ dur = string2numpy(data,1)
+ # get mean values per clip now,
+ # then statistics over clips later
+ avg, std = numstats(freq, weights = dur)
+ hist = histogram(freq, nbins = 100, lb=390, ub=490, weights = dur)
+
+ # remember statistics
+ perfile_freq.append(avg)
+ perfile_hist.append(hist["count"])
+
+ # remember histogram index
+ if len(hist_index) == 0:
+ hist_index = hist["index"]
+
+ else:
+ # this version just adds everything per collection,
+ # recordings are not treated as separate entities
+ all_data.extend(data)
+
+
+ if perfilestats:
+ avg, std = histostats(numpy.array(perfile_freq,dtype=float))
+ hist_avg, hist_std = histostats(numpy.array(perfile_hist,dtype=float))
+ # use the averaged per-file histogram as the collection histogram
+ hist = {"count": hist_avg, "index": hist_index}
+
+ else:
+ # get frequency and duration columns
+ freq = string2numpy(all_data,2)
+ dur = string2numpy(all_data,1)
+
+ # get basic statistics
+ avg, std = numstats(freq, weights = dur)
+
+ # get histogram weighted by duration
+ hist = histogram(freq, nbins = 100, lb=390, ub=490, weights = dur)
+
+ return {"mean": avg, "std-dev": std, "histogram": hist}, sample_count
+
+# convert one column, specified by datapos, to numpy
+def string2numpy(data,datapos):
+
+ edata = []
+ for row in data:
+ edata.append(row[datapos])
+
+ colu = numpy.array(edata,dtype=float)
+ return colu
+
+# calculates the histogram
+# nbins: number of bins
+# lb: lower bound (-1 to infer from the data)
+# ub: upper bound (-1 to infer from the data)
+def histogram(colu, nbins = 100, lb=-1, ub=-1, weights = None):
+
+ # fall back to the data range if bounds are not given
+ if lb == -1 or ub == -1:
+ lb = colu.min()
+ ub = colu.max()
+
+ # get histogram
+ count, index = numpy.histogram(colu, bins=nbins, range=[lb, ub], weights=weights)
+ index = index.tolist()
+
+ # normalise for clip, so the largest bin equals 1
+ count = (count / numpy.max(count)).tolist()
+
+ # return histogram
+ return {"count":count, "index":index}
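+
+# A sketch with hypothetical values: two detections at 440 Hz (1 s) and
+# 442 Hz (3 s) over two bins [439, 441) and [441, 443] give weighted
+# counts [1, 3], normalised to roughly [0.33, 1.0]:
+# histogram(numpy.array([440.0, 442.0]), nbins=2, lb=439, ub=443,
+# weights=numpy.array([1.0, 3.0]))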
+
+
+# calculates unweighted statistics over the per-file histograms
+def histostats(counts):
+ avg = numpy.average(counts, axis = 0).tolist()
+
+ # unweighted standard deviation across files
+ std = numpy.std(counts, axis = 0)
+
+ #med = numpy.median(colu, weights = weights).tolist()
+ # could use https://pypi.python.org/pypi/wquantiles for weighted median
+
+ return (avg,std)
+
+# calculates weighted statistics for numerical input
+def numstats(colu, weights = None):
+
+ # we want to always use the last dimension
+ # get average
+ avg = numpy.average(colu, axis = 0 ,weights = weights)
+
+ #weighted standard deviation
+ std = numpy.sqrt(numpy.average((colu-avg)**2, axis = 0, weights=weights))
+ #std = numpy.std(colu, weights = weights).tolist()
+
+ #med = numpy.median(colu, weights = weights).tolist()
+ # could use https://pypi.python.org/pypi/wquantiles for weighted median
+
+ return (avg,std)
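+
+# A sketch with hypothetical values: frequencies [440, 444] weighted by
+# durations [1, 3] give avg = (440*1 + 444*3) / 4 = 443.0 and
+# std = sqrt((9*1 + 1*3) / 4) = sqrt(3), about 1.73:
+# numstats(numpy.array([440.0, 444.0]), weights=numpy.array([1.0, 3.0]))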
+
+
+# only returns data columns which refer to the note A
+# the frequencies are folded up / down to A4
+# returns time, duration, frequency
+def filter_norm_A(data):
+ Adata = []
+ for row in data:
+ # we assume format time, duration, pitch, integer_pitch, label
+ if 'A3' in row[4]:
+ Adata.append(row[:2] + [2*row[2]])
+ elif 'A4' in row[4]:
+ Adata.append(row[:3])
+ elif 'A5' in row[4]:
+ Adata.append(row[:2] + [0.5*row[2]])
+
+ return Adata
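+
+# A sketch with hypothetical rows (time, duration, pitch, integer_pitch,
+# label): an A3 at 220 Hz is doubled and an A5 at 880 Hz is halved, so
+# both fold to 440 Hz; rows without an A label are dropped:
+# filter_norm_A([[0.0, 0.5, 220.0, 57, 'A3'],
+# [0.5, 0.5, 880.0, 81, 'A5'],
+# [1.0, 0.5, 523.3, 72, 'C5']])
+# -> [[0.0, 0.5, 440.0], [0.5, 0.5, 440.0]]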
+
+
+# Read named features into table of format
+# time, feature[0], feature[1] ...
+def file_to_table(input_f_file):
+ if input_f_file.endswith('.n3'):
+ data = n3_to_table(input_f_file)
+ elif input_f_file.endswith('.csv'):
+ data = csv_to_table(input_f_file)
+ #data = get_array_from_csv(input_f_file)
+ #data = get_dict_from_csv(input_f_file,columtype = ['time','duration','pitch','velocity','label'])
+ return data
+
+
+# Read named features into table of format
+# time, feature[0], feature[1] ...
+def n3_to_table(input_f_file):
+
+ # read feature file
+ feature_graph = get_rdf_graph_from_n3(input_f_file)
+
+ # we construct a generic search string that gets all
+ # necessary features
+
+ # standard c4dm event/timeline, audio features and rdfs namespaces
+ q = """PREFIX event: <http://purl.org/NET/c4dm/event.owl#>
+ PREFIX tl: <http://purl.org/NET/c4dm/timeline.owl#>
+ PREFIX af: <http://purl.org/ontology/af/>
+ PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+ SELECT ?event ?tl_time ?tl_duration ?feature ?label
+ WHERE {
+ ?event event:time ?event_time .
+ ?event_time tl:beginsAt ?tl_time .
+ ?event_time tl:duration ?tl_duration .
+ ?event rdfs:label ?label .
+ ?event af:feature ?feature .
+ }"""
+
+ # query parsed file
+ qres = feature_graph.query(q)
+ data = []
+ for row in qres:
+ # parse time: strip the leading 'PT' and trailing 'S' of the
+ # xsd duration-style literal (e.g. 'PT1.5S')
+ tl_time_str_len = len(row.tl_time)
+ tl_time = float(row.tl_time[2:tl_time_str_len-1])
+
+ # parse duration (same literal format, converted to float later)
+ tl_dur_str_len = len(row.tl_duration)
+ tl_duration = row.tl_duration[2:tl_dur_str_len-1]
+ # parse feature
+ data.append([tl_time, tl_duration] + [float(i) for i in row.feature.split(' ') ] + [row.label])
+
+ #data = numpy.array(data, dtype=float)
+ # print data
+ # we assume format time , duration , pitch, velocity, label
+ return data #int(last_key)
+
+# csv counterpart of n3_to_table, so the same script works with csv input
+def csv_to_table(input_f_file):
+
+ output = []
+ badcount = 0
+
+ # keep track of column names
+ ncols = 0
+ with open(uri2path(input_f_file), 'rb') as csvfile:
+ contents = csv.reader(csvfile, delimiter=',', quotechar='"')
+ for row in contents:
+ if ncols == 0:
+ ncols = len(row)
+
+ if len(row) >= ncols:
+ # we assume format time , duration , pitch, velocity, label
+ output.append([float(row[0]), float(row[1]), float(row[2])] + row[3:])
+ else:
+ badcount += 1
+
+ if badcount > 0:
+ warnings.warn("Incomplete csv file, ignoring " + str(badcount) + " entries")
+
+ return output
\ No newline at end of file
diff -r 000000000000 -r e34cf1b6fe09 pyspark/vamp_plugin_dml.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/pyspark/vamp_plugin_dml.py Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,189 @@
+# -*- coding: utf-8 -*-
+#
+# Copyright (c) 2013 Paul Brossier
+
+# This file is part of TimeSide.
+
+# TimeSide is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 2 of the License, or
+# (at your option) any later version.
+
+# TimeSide is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with TimeSide. If not, see <http://www.gnu.org/licenses/>.
+
+# Author: Paul Brossier
+
+from timeside.core import implements, interfacedoc
+from timeside.analyzer.core import Analyzer
+from timeside.api import IAnalyzer
+import re
+import subprocess
+import numpy as np
+
+
+def simple_host_process(argslist):
+ """Call vamp-simple-host"""
+
+ vamp_host = 'vamp-simple-host'
+ command = [vamp_host]
+ command.extend(argslist)
+ # may raise CalledProcessError if the host exits with an error
+ stdout = subprocess.check_output(
+ command, stderr=subprocess.STDOUT).splitlines()
+
+ return stdout
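+
+# For example, simple_host_process(['--list-outputs']) returns one line
+# per available plugin output (see get_plugins_list below)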
+
+
+# Raise an exception if Vamp Host is missing
+from timeside.exceptions import VampImportError
+try:
+ simple_host_process(['-v'])
+ WITH_VAMP = True
+except OSError:
+ WITH_VAMP = False
+ raise VampImportError
+
+
+class VampSimpleHostDML(Analyzer):
+
+ """Vamp plugins library interface analyzer"""
+
+ implements(IAnalyzer)
+
+ def __init__(self, plugin_list=None):
+ super(VampSimpleHostDML, self).__init__()
+ if plugin_list is None:
+ plugin_list = self.get_plugins_list()
+ #plugin_list = [['vamp-example-plugins', 'percussiononsets', 'detectionfunction']]
+
+ self.plugin_list = plugin_list
+
+ @interfacedoc
+ def setup(self, channels=None, samplerate=None,
+ blocksize=None, totalframes=None):
+ super(VampSimpleHostDML, self).setup(
+ channels, samplerate, blocksize, totalframes)
+
+ @staticmethod
+ @interfacedoc
+ def id():
+ return "vamp_simple_host_dml"
+
+ @staticmethod
+ @interfacedoc
+ def name():
+ return "Vamp Plugins Host for DML"
+
+ @staticmethod
+ @interfacedoc
+ def unit():
+ return ""
+
+ def process(self, frames, eod=False):
+ return frames, eod
+
+ def post_process(self):
+ #plugin = 'vamp-example-plugins:amplitudefollower:amplitude'
+
+ wavfile = self.mediainfo()['uri'].split('file://')[-1]
+
+ for plugin_line in self.plugin_list:
+
+ plugin = ':'.join(plugin_line)
+ (time, duration, value) = self.vamp_plugin(plugin, wavfile)
+ if value is None:
+ return
+
+ if duration is not None:
+ plugin_res = self.new_result(
+ data_mode='value', time_mode='segment')
+ plugin_res.data_object.duration = duration
+ else:
+ plugin_res = self.new_result(
+ data_mode='value', time_mode='event')
+
+ plugin_res.data_object.time = time
+ plugin_res.data_object.value = value
+
+# Fix start, duration issues if audio is a segment
+# if self.mediainfo()['is_segment']:
+# start_index = np.floor(self.mediainfo()['start'] *
+# self.result_samplerate /
+# self.result_stepsize)
+#
+# stop_index = np.ceil((self.mediainfo()['start'] +
+# self.mediainfo()['duration']) *
+# self.result_samplerate /
+# self.result_stepsize)
+#
+# fixed_start = (start_index * self.result_stepsize /
+# self.result_samplerate)
+# fixed_duration = ((stop_index - start_index) * self.result_stepsize /
+# self.result_samplerate)
+#
+# plugin_res.audio_metadata.start = fixed_start
+# plugin_res.audio_metadata.duration = fixed_duration
+#
+# value = value[start_index:stop_index + 1]
+ plugin_res.id_metadata.id += '.' + '.'.join(plugin_line[1:])
+ plugin_res.id_metadata.name += ' ' + \
+ ' '.join(plugin_line[1:])
+
+ self.process_pipe.results.add(plugin_res)
+
+ @staticmethod
+ def vamp_plugin(plugin, wavfile):
+
+ args = [plugin, wavfile]
+
+ stdout = simple_host_process(args) # run vamp-simple-host
+
+ stderr = stdout[0:8] # diagnostic header lines (stderr merged into stdout)
+ res = stdout[8:] # the feature data itself
+
+ if len(res) == 0:
+ return ([], [], [])
+
+ # Parse stderr to get blocksize and stepsize
+ blocksize_info = stderr[4]
+
+ # Match against pattern 'Using block size = %d, step size = %d'
+ m = re.match(
+ 'Using block size = (\d+), step size = (\d+)', blocksize_info)
+
+ blocksize = int(m.groups()[0])
+ stepsize = int(m.groups()[1])
+ # Get the results
+
+ # how are types defined in this? what types can timeside use?
+ # value = np.asfarray([line.split(': ')[1] for line in res if (len(line.split(': ')) > 1)])
+ value = [line.split(': ')[1] for line in res if (len(line.split(': ')) > 1)]
+ value = [re.sub("[^0-9\.\s]","",x) for x in value]
+ value = np.asfarray([[x.split(' ')[0] for x in value], [x.split(' ')[1] for x in value]]) # keep the first two numbers per line
+ time = np.asfarray([r.split(':')[0].split(',')[0] for r in res])
+
+ time_len = len(res[0].split(':')[0].split(','))
+ if time_len == 1:
+ # event
+ duration = None
+ elif time_len == 2:
+ # segment
+ duration = np.asfarray(
+ [r.split(':')[0].split(',')[1] for r in res])
+ else:
+ # unexpected timestamp format
+ duration = None
+
+ return (time, duration, value)
+
+ @staticmethod
+ def get_plugins_list():
+ arg = ['--list-outputs']
+ stdout = simple_host_process(arg)
+
+ return [line.split(':')[1:] for line in stdout]
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/bbc_speechmusic.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/bbc_speechmusic.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,29 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "change_threshold" ] ;
+ vamp:value "0.0781"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "decision_threshold" ] ;
+ vamp:value "0.2734"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "margin" ] ;
+ vamp:value "14"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "min_music_length" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "resolution" ] ;
+ vamp:value "256"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/beatroot_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/beatroot_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,26 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "441"^^xsd:int ;
+ vamp:block_size "2048"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "expiryTime" ] ;
+ vamp:value "10"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "maxChange" ] ;
+ vamp:value "0.2"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "postMarginFactor" ] ;
+ vamp:value "0.3"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "preMarginFactor" ] ;
+ vamp:value "0.15"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/chordino_chordnotes.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/chordino_chordnotes.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,37 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "2048"^^xsd:int ;
+ vamp:block_size "16384"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "boostn" ] ;
+ vamp:value "0.1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "rollon" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "s" ] ;
+ vamp:value "0.7"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuningmode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "useHMM" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "useNNLS" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whitening" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/chordino_simple.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/chordino_simple.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,37 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "2048"^^xsd:int ;
+ vamp:block_size "16384"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "boostn" ] ;
+ vamp:value "0.1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "rollon" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "s" ] ;
+ vamp:value "0.7"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuningmode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "useHMM" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "useNNLS" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whitening" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/marysas_beat_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/marysas_beat_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,41 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform_plugin a vamp:Plugin ;
+ vamp:identifier "marsyas_ibt" .
+
+:transform_library a vamp:PluginLibrary ;
+ vamp:identifier "mvamp-ibt" ;
+ vamp:available_plugin :transform_plugin .
+
+:transform a vamp:Transform ;
+ vamp:plugin :transform_plugin ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "indtime" ] ;
+ vamp:value "5"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "induction" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "maxbpm" ] ;
+ vamp:value "250"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "metrical_changes" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minbpm" ] ;
+ vamp:value "50"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "online" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output [ vamp:identifier "beat_times" ] .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/melodia_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/melodia_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,27 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "128"^^xsd:int ;
+ vamp:block_size "2048"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:program """Polyphonic""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "maxfqr" ] ;
+ vamp:value "1760"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minfqr" ] ;
+ vamp:value "55"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minpeaksalience" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "voicing" ] ;
+ vamp:value "0.2"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/nnls-bothchroma_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/nnls-bothchroma_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,34 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "2048"^^xsd:int ;
+ vamp:block_size "16384"^^xsd:int ;
+ vamp:plugin_version """3""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "chromanormalize" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "rollon" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "s" ] ;
+ vamp:value "0.7"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuningmode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "useNNLS" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whitening" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/nnls-logfreqspec_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/nnls-logfreqspec_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,34 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "2048"^^xsd:int ;
+ vamp:block_size "16384"^^xsd:int ;
+ vamp:plugin_version """3""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "chromanormalize" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "rollon" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "s" ] ;
+ vamp:value "0.7"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuningmode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "useNNLS" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whitening" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/powerspectrum_2048_1024.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/powerspectrum_2048_1024.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,10 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "2048"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/powerspectrum_2048_512.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/powerspectrum_2048_512.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,10 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "2048"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/powerspectrum_4096_512.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/powerspectrum_4096_512.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,10 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "4096"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-barbeattracker_beatcounts.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-barbeattracker_beatcounts.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,14 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:plugin_version """2""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "bpb" ] ;
+ vamp:value "4"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-chromagram-means_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-chromagram-means_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,30 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "2048"^^xsd:int ;
+ vamp:block_size "16384"^^xsd:int ;
+ vamp:plugin_version """4""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "bpo" ] ;
+ vamp:value "12"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "maxpitch" ] ;
+ vamp:value "96"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minpitch" ] ;
+ vamp:value "36"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "normalization" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuning" ] ;
+ vamp:value "440"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-chromagram_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-chromagram_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,30 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "2048"^^xsd:int ;
+ vamp:block_size "16384"^^xsd:int ;
+ vamp:plugin_version """4""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "bpo" ] ;
+ vamp:value "12"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "maxpitch" ] ;
+ vamp:value "96"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minpitch" ] ;
+ vamp:value "36"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "normalization" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuning" ] ;
+ vamp:value "440"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-constantq_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-constantq_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,30 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "2048"^^xsd:int ;
+ vamp:block_size "16384"^^xsd:int ;
+ vamp:plugin_version """3""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "bpo" ] ;
+ vamp:value "12"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "maxpitch" ] ;
+ vamp:value "84"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minpitch" ] ;
+ vamp:value "36"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "normalized" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuning" ] ;
+ vamp:value "440"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-mfcc-means_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-mfcc-means_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,22 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "2048"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "logpower" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "nceps" ] ;
+ vamp:value "20"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "wantc0" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-mfcc-standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-mfcc-standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,22 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "2048"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "logpower" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "nceps" ] ;
+ vamp:value "20"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "wantc0" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-onsets-detectionfn_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-onsets-detectionfn_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,22 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:plugin_version """3""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "dftype" ] ;
+ vamp:value "3"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "sensitivity" ] ;
+ vamp:value "50"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whiten" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-onsets_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-onsets_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,22 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:plugin_version """3""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "dftype" ] ;
+ vamp:value "3"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "sensitivity" ] ;
+ vamp:value "50"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whiten" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm-segmentation_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm-segmentation_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "8820"^^xsd:int ;
+ vamp:block_size "26460"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "featureType" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "nSegmentTypes" ] ;
+ vamp:value "10"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "neighbourhoodLimit" ] ;
+ vamp:value "4"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm_vamp_key_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm_vamp_key_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,17 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "32768"^^xsd:int ;
+ vamp:block_size "32768"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "length" ] ;
+ vamp:value "10"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuning" ] ;
+ vamp:value "440"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/qm_vamp_key_standard_tonic.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/qm_vamp_key_standard_tonic.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,17 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "32768"^^xsd:int ;
+ vamp:block_size "32768"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "length" ] ;
+ vamp:value "10"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "tuning" ] ;
+ vamp:value "440"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_fast.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_fast.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_fast_finetune.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_fast_finetune.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_fast_finetune_allinstruments.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_fast_finetune_allinstruments.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_fast_nonfinetune_allinstruments.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_fast_nonfinetune_allinstruments.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/silvet#silvet_output_notes> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_slow_finetune.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_slow_finetune.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/silvet#silvet> ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/silvet#silvet_output_notes> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_slow_finetune_allinstruments.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_slow_finetune_allinstruments.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/silvet#silvet> ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/silvet#silvet_output_notes> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_slow_finetune_piano.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_slow_finetune_piano.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/silvet#silvet> ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/silvet#silvet_output_notes> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_slow_nonfinetune_allinstruments.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_slow_nonfinetune_allinstruments.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/silvet#silvet> ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/silvet#silvet_output_notes> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/silvet_settings_slow_nonfinetune_piano.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/silvet_settings_slow_nonfinetune_piano.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,21 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/silvet#silvet> ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "finetune" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "instrument" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "mode" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/silvet#silvet_output_notes> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/tempogram_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/tempogram_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,46 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/tempogram#tempogram> ;
+ vamp:step_size "1024"^^xsd:int ;
+ vamp:block_size "2048"^^xsd:int ;
+ vamp:plugin_version """1""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "C" ] ;
+ vamp:value "1000"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "log2FftLength" ] ;
+ vamp:value "10"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "log2HopSize" ] ;
+ vamp:value "6"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "log2TN" ] ;
+ vamp:value "10"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "maxBPM" ] ;
+ vamp:value "480"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minBPM" ] ;
+ vamp:value "30"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "minDB" ] ;
+ vamp:value "-74"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "octDiv" ] ;
+ vamp:value "30"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "refBPM" ] ;
+ vamp:value "60"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/tempogram#tempogram_output_cyclicTempogram> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/tempotracker_beats_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/tempotracker_beats_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,22 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/qm-vamp-plugins#qm-tempotracker> ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:plugin_version """5""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "dftype" ] ;
+ vamp:value "3"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "method" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whiten" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/qm-vamp-plugins#qm-tempotracker_output_beats> .
diff -r 000000000000 -r e34cf1b6fe09 sonic_annotator/vamp_plugins/tempotracker_tempo_standard.n3
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sonic_annotator/vamp_plugins/tempotracker_tempo_standard.n3 Sat Feb 20 18:14:24 2016 +0100
@@ -0,0 +1,22 @@
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix vamp: <http://purl.org/ontology/vamp/> .
+@prefix : <#> .
+
+:transform a vamp:Transform ;
+ vamp:plugin <http://vamp-plugins.org/rdf/plugins/qm-vamp-plugins#qm-tempotracker> ;
+ vamp:step_size "512"^^xsd:int ;
+ vamp:block_size "1024"^^xsd:int ;
+ vamp:plugin_version """5""" ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "dftype" ] ;
+ vamp:value "3"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "method" ] ;
+ vamp:value "1"^^xsd:float ;
+ ] ;
+ vamp:parameter_binding [
+ vamp:parameter [ vamp:identifier "whiten" ] ;
+ vamp:value "0"^^xsd:float ;
+ ] ;
+ vamp:output <http://vamp-plugins.org/rdf/plugins/qm-vamp-plugins#qm-tempotracker_output_tempo> .
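Note (not part of the patch): the Silvet transforms above differ only in three parameter bindings (finetune, instrument, mode). A stdlib-only sketch for extracting those bindings from a transform file, assuming the flat one-binding-per-`vamp:parameter_binding [...]` layout used in these .n3 files (a real RDF parser such as rdflib would be more robust):

```python
import re

# Matches one "vamp:parameter_binding [ ... ]" block in the flat Turtle
# style used by the transforms in this patch, capturing the parameter
# identifier and its xsd:float value.
BINDING = re.compile(
    r'vamp:parameter_binding\s*\[\s*'
    r'vamp:parameter\s*\[\s*vamp:identifier\s*"([^"]+)"\s*\]\s*;\s*'
    r'vamp:value\s*"([^"]+)"\^\^xsd:float\s*;\s*\]'
)

def bindings(n3_text):
    """Return {parameter_identifier: float_value} for one transform."""
    return {name: float(value) for name, value in BINDING.findall(n3_text)}

# Example input: the bindings from silvet_settings_slow_finetune_piano.n3.
sample = '''
:transform a vamp:Transform ;
    vamp:step_size "1024"^^xsd:int ;
    vamp:block_size "1024"^^xsd:int ;
    vamp:parameter_binding [
        vamp:parameter [ vamp:identifier "finetune" ] ;
        vamp:value "1"^^xsd:float ;
    ] ;
    vamp:parameter_binding [
        vamp:parameter [ vamp:identifier "instrument" ] ;
        vamp:value "1"^^xsd:float ;
    ] ;
    vamp:parameter_binding [
        vamp:parameter [ vamp:identifier "mode" ] ;
        vamp:value "1"^^xsd:float ;
    ] .
'''

print(bindings(sample))  # {'finetune': 1.0, 'instrument': 1.0, 'mode': 1.0}
```

Running this over each silvet_settings_*.n3 file gives a quick table of which variant enables finetuning, the piano template, and the slow (high-quality) mode.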