On 5-6 May I attended OER17, the annual UK Open Educational Resources (OER) conference, held this year in London. It was a great chance to catch up with that part of the vibrant open education community, and a good excuse to walk around the Bloomsbury area where I worked a few years ago on open education projects with the UAL. On the first day I presented on the general applications of web annotation technology to support open education – which are considerable in my opinion – ending the presentation with a description of how Clipper could support open education. The talk was entitled ‘You Me Them and Everybody’, a famous line from a 1960s song that featured in The Blues Brothers film. There seemed to be a lot of interest – which was very encouraging.
We met up at the OU from Monday to Wednesday this week for a development session. We got a lot done and advanced quite a bit – largely due to the work of our Spanish colleague David Roldan Alvarez. We have been doing quite a bit of redesign in the run-up to launching a live service. Clipper will now work with these services:
YouTube – done
Soundcloud – waiting on API permissions
Facebook – done
Vimeo – done
Podbean – done with user info
BBC radio – done with user info
M3U8 format – working
Dropbox – done – works with user guidance (change the 0 at the end of the share URL to a 1)
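The Dropbox guidance above can be scripted rather than left to the user. A minimal sketch, assuming the standard Dropbox share-link format where the trailing `dl=0` query parameter is switched to `dl=1` to obtain a directly playable file (the function name is ours, not part of Clipper):

```javascript
// Convert a Dropbox share link into a direct-download link by
// switching the dl=0 query parameter to dl=1.
// Assumes the usual share-link format, e.g.
//   https://www.dropbox.com/s/abc123/clip.mp4?dl=0
function toDirectDropboxUrl(shareUrl) {
  const url = new URL(shareUrl);      // parse so we edit the query safely
  url.searchParams.set('dl', '1');    // dl=1 asks Dropbox for the raw file
  return url.toString();
}
```

This could run in the Clipper editor just before the pasted URL is handed to the media player, removing one manual step from the guidance.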
That’s quite a list! It takes Clipper into really interesting territory as we go forwards. We are working on Microsoft OneDrive and Box, as well as on working over intranets and with DAM systems. We have a bit more to do before we update the current version. M3U8 is a big step: it’s a format used by a lot of museums and archives, and it presents a ‘barrier’ between a video stream URL and users for security. What this means is a big deal for us – if Clipper is whitelisted with an archive service, Clipper users can access the archive, create clips and annotations, and share them over the open / social web while the video stays where it is. For some archives this could be a game changer – by allowing more interactive user engagement and spreading it into the world of social media, it could drive a lot of traffic to the archive website and, importantly, generate masses of rich analytics – which in turn can justify the costs of the archive and even lead to new revenue streams.
And Facebook… that’s a big deal, as it is a major video platform these days; this opens up some interesting possibilities in education and community learning.
In this ‘longform’ blog post we consider some parallel development opportunities arising from our work on the Clipper Toolkit, based on what we have learnt in Phase 3 and reflecting on previous involvement in open education projects. Here we examine the feasibility of creating an authoring toolkit to make it easier to produce web-based open learning materials. That this is still a challenge may be surprising, given how long e-learning has been around and promising to change our education systems.
Here, we examine some fundamental practical problems and discuss possible solutions. Along the way we consider the opportunities for changing the way we currently work in education to deliver true open education opportunities to students. In the process, we discuss how this is all closely linked to the key issues of Inclusion, Accessibility, Quality, Efficiency and Change.
We propose an open education digital learning resource toolkit that:
Creates web native content that is accessible and works well with assistive technologies, is usable on smartphones and can be used offline
Takes content out of the VLE / LMS – link to it instead. This makes things much, much easier in relation to information management, quality assurance and maintenance etc. The VLE / LMS is then used for discussion, collaboration, grading and feedback etc.
Makes content available in different formats e.g.:
Online web microsites for directly linking to from a VLE etc.
Downloadable zipped web microsites for offline use
EPUB etc. formats for offline access via ebook readers etc.
PDF for offline use and printing
Enables collaborative workflows to jointly author content
Has a storage / DAM (Digital Asset Management) component of the system, linked to a ‘social layer’, where users can share and discuss their work and collaborate with each other (either publicly or privately) – here is a link to an online interactive early prototype designed for art colleges
Improves information management and quality control for both individuals and institutions
Generates rich analytics
Supports the sharing of content with different ‘onion skin’ levels of access; private / named individuals / groups / institutions / open to the web with a CC licence
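The ‘onion skin’ levels in the last point can be sketched in code. This is an illustrative sketch only – the level names, field names and function are our assumptions for this post, not part of any actual Clipper API:

```javascript
// Sketch of 'onion skin' access levels, ordered from most private
// to fully open. A resource carries the level it was shared at;
// canView checks whether a given viewer falls inside that 'skin'.
// All names here are hypothetical illustrations.
function canView(resource, viewer) {
  switch (resource.level) {
    case 'private':                             // just the owner
      return viewer.id === resource.ownerId;
    case 'named':                               // owner plus named individuals
      return viewer.id === resource.ownerId ||
             resource.namedUsers.includes(viewer.id);
    case 'group':                               // members of a course/team group
      return viewer.groups.includes(resource.groupId);
    case 'institution':                         // anyone at the same institution
      return viewer.institution === resource.institution;
    case 'open':                                // open to the web, CC licensed
      return true;
    default:
      return false;
  }
}
```

The point of the ordered levels is that widening access is a single field change on the resource, rather than a re-publish.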
Outline System Diagram
Working Title: open4ed (open publishing engine for education)
Skills, Money and Time
In past projects we have used the MIT OCW site as a good example of making standard course materials openly available with a Creative Commons licence. It’s a great, inspiring project, but we know that it takes a lot of resources to produce the content and present it that way. What we would like are tools and services that make it easier for users in education and elsewhere to author their own web-native ‘CourseWare’ – not necessarily for open distribution, but also for use in their own ‘internal’ college courses. Unfortunately, this is beyond the reach of many, who lack the skills or the time needed to use specialist (often expensive) software to create web content.
There is a big need for this. In our colleges, online learning resources tend to be loaded into the Virtual Learning Environment (VLE), such as Moodle, by lecturers in the form of Word documents and PowerPoint slides etc. The trouble with this is that it is unusable for students who do not possess a computer with the proprietary software needed to view it, and whose main means of accessing the Internet is a smartphone. Even those students who do possess the necessary kit must go through all sorts of hassle to download and view the content. It would be much better if the content were ‘web-native’ from the start so everyone could view it. We are not the only ones struggling with this: the publishing industry is still trying to make the transition from paper to digital, with many of its designers having to use tortuous workflows and expensive software to convert their existing work to online ‘webified’ versions.
This situation poses some fundamental practical problems for colleges that want to provide blended learning for their students. Recent research (Ofcom) indicates that over 30% of the UK population now rely on a smartphone as their main means of accessing the internet – often without a domestic broadband connection. This is certainly confirmed by recent learner analytics data from City of Glasgow College. It means that colleges are effectively locking out a large and growing demographic – especially those in workplace and community learning contexts.
There is another serious problem with this situation: the accessibility of the content in the VLE for students with disabilities. Much of this content is not designed or formatted to work well with assistive devices. If the content were truly web-based from the start, the chances of better accessibility would be dramatically increased.
Efficiency and Quality
Another issue relates to the way we currently use our VLEs: people tend to project old patterns of working onto new technology. This results in the VLE being used as a personal digital library by lecturers to collect and deliver their own learning content (copied and altered) into multiple courses. This presents severe problems in information management, quality assurance and efficiency, both for individual lecturers and from an institutional point of view. From a student perspective, access to course learning resources can be highly variable in content, consistency and quality as they cross individual lecturers’ personal VLE ‘silos’.
User Requirements: Learners – Teachers – Institutions
Based on what we have been discussing, and on working with lecturers and students in vocational education institutions, here is a short list of requirements.
Students need digital learning resources that are:
Usable on a smartphone
Accessible with assistive technologies
Downloadable for use offline
Printable as required
Convertible into eBooks / PDF etc. as needed
Lecturers and Instructors need tools and systems that can:
Enable the easy creation of learning resources that are web native from the start, usable on smartphones, and work with assistive technologies
Support different workflows:
Sole author doing everything
Subject expert authors with restricted rights, working with editors and designers to create high quality content
Take the content out of the VLE / LMS to facilitate sharing and collaboration by providing the means for granular access and sharing rights to review and co-create content (using an ‘onion skin’ metaphor for sharing) with:
Just me (private)
Open to the WWW (e.g. OERs, publicity materials etc.)
Provide the means of easy attribution and affiliation (for professional reputation and institutional benefit)
Colleges / Learning Providers need tools and systems that can:
Support the workflows needed to make blended learning economically viable
Enable the effective management and sharing of learning resources – outside the VLE / LMS
Offer the ability to link learning resources to the VLE / LMS
Gather useful analytical data
Provide learning resources that are accessible to disabled learners
Deliver learning resources that are usable on smartphones
Use file formats that enable long term access and reuse of content
Attach different licences and employ simple digital rights management (DRM) methods
Share learning content with partner institutions
Publish learning resources on the open web – i.e. enable public access
Outline System Description
Inspired by the MIT OpenCourseWare initiative and innovative teaching teams in FE & HE
Uses a NoSQL database as the back-end, with data documents encoded in JSON-LD / BSON
Uses CSS / JS to create different styles of web content – articles, slideshows, magazines etc.
Content is saved as:
JSON-LD documents for live editing
Mini website (HTML) online with a persistent URL (for linking to) in a web directory for serving and archiving
Downloadable zip file with self-contained website for offline viewing (and editing)
EPUB / PDF for offline viewing on e-readers and printing
The Storage / DAM component of the system is linked to a ‘social layer’ (see the schematic diagram above), where users can share and discuss their work and collaborate with each other – here is a link to an early prototype
You can register, or log in with email@example.com and a password of clipper20. This produces mini websites that contain ‘virtual’ clip collections, providing a starting basis for the content creation / conversion tool.
Storage / Sharing solutions
This could be a variety of options, from an in-house repository / Digital Asset Management (DAM) solution to external cloud services such as the network file systems (NFS) supplied by Amazon and others. A simple subject / course taxonomy is needed for browsing (MIT provides a good example), together with full text searching. The fact that the learning resources created by the content creation tool would be ‘well-structured documents’ would help with searching functions.
Based on previous (painful!) experience of working with older academic repository projects, keeping this part of the system as simple as possible is sensible. Content would be stored in simple web directories (apart from the JSON-LD documents, which would live in the NoSQL database), together with an XML / JSON metadata ‘ticket’ describing the content in each directory. In addition, the content metadata would be indexed in a database to facilitate searching. This way the system content would be both machine and human readable in the long term, which aids resilience.
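To make the ‘ticket’ idea concrete, here is a sketch of what one might contain. Every field name and value below is an illustrative assumption for this post, not a fixed schema:

```javascript
// Illustrative metadata 'ticket' stored alongside a content directory.
// Field names and values are assumptions for this sketch, not a schema.
const ticket = {
  id: 'course-101-intro',               // matches the directory name
  title: 'Introduction to Welding Safety',
  authors: ['A. Lecturer'],
  licence: 'CC-BY-4.0',                 // machine-readable licence tag
  formats: ['html', 'epub', 'pdf', 'zip'],
  created: '2017-05-01',
  keywords: ['welding', 'safety', 'engineering']
};

// Serialised to plain JSON in the directory, so the ticket stays both
// human- and machine-readable in the long term.
const json = JSON.stringify(ticket, null, 2);
```

The same JSON would be what the search index ingests, so the directory on disk and the database never disagree about what a resource is.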
What this proposal is not
This is not a SCORM / Learning Object editor (e.g. Articulate, Xerte, Adapt etc.). There are plenty of existing tools that do these things with varying degrees of success. In the college sector, only a small proportion of learning resources are in these formats, and they often need specialised staff to create, operate and support them. This proposal seeks to tackle the problems posed for students by most existing content in college VLEs, as described in this post.
Why Not Use a CMS?
Content Management Systems (CMSs) have been around a long time and some, such as WordPress and Drupal, do a great job; they are popular and widely used, requiring varying levels of technical expertise to install, maintain and use. If they could meet the kind of needs described in this post, our college VLEs / LMSs would already be full of web-based content that was accessible and usable on a smartphone. The reason CMSs do not provide a solution lies in their fundamental design: they are intended to make it easier to create and manage web content within their own framework. To varying degrees this means the content is trapped inside the CMS application (distributed inside its own file system) and can be difficult for users to export.
Extracting the content from the CMS as a self-contained website is not something it was ever designed to do – the job of the CMS is to assemble the content and ‘play’ it in a web browser; the CMS provides the platform to present the content to the web. So, a fundamental difference is that our system is intended to create web content (micro-sites) outside of the authoring tool, so that it is portable and can be placed wherever we want. We also want our web content to be editable by tools other than our own content editor; by creating self-contained microsites with HTML as our native file format this becomes possible. This approach also lends itself to producing different formats from the same content, such as EPUB, PDF etc., for use offline and printing on paper. This is basically the same philosophy used by O’Reilly Publishing to support its in-house print / electronic publishing operations – so we feel we are on the right track here.
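The ‘HTML as native file format’ philosophy can be sketched very simply: render a stored content document straight to a self-contained HTML page, so the output needs no CMS to ‘play’ it and any other tool can open and edit it. The document shape and function name below are hypothetical illustrations, not the actual toolkit:

```javascript
// Hypothetical sketch: render a content document (title + sections)
// as one self-contained HTML page. The output is plain HTML, so it
// can live in a web directory, a zip file, or an EPUB pipeline.
function renderMicrosite(doc) {
  const sections = doc.sections
    .map(s => `  <section>\n    <h2>${s.heading}</h2>\n    <p>${s.text}</p>\n  </section>`)
    .join('\n');
  return `<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>${doc.title}</title>
</head>
<body>
  <h1>${doc.title}</h1>
${sections}
</body>
</html>
`;
}
```

Because the result is just a string of standard HTML, the same document can feed the online microsite, the downloadable zip, and the EPUB / PDF conversions described earlier.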
Using a CMS to provide the ‘Social Layer’
However, we do see a role for CMSs in our proposed system – not for content creation, but to provide a ‘social layer’ on top of the system, where users can share and discuss the learning resources in the system (with private and public modes). This would be particularly useful for those resources that are open to the web. A good starting model would be the system used by graphic designers to showcase their work, Behance. Although this is now based on a proprietary Adobe system, it would be possible to create something similar based on the Drupal CMS – in fact, a few years back we worked on a prototype for this with Bright Lemon, a leading Drupal social media company in London.
Why has this not been done / proposed before?
That, as they say, is a good question. It might sound strange in an ed-tech setting, but we think the reasons are more sociological than technical. By that we mean that people and institutions can become conservative in their use of technologies (even tech people) and can be very loyal to technologies, platforms and commercial providers for a whole host of reasons. You can see this dynamic play out every day when proposing a change to an institutional IT department. In the UK, this has led to the VLE / LMS becoming embedded as the central feature of blended learning provision, with vendors adding new features and expanding functionality all the time to keep their customers ‘hooked’. In this situation, it can be very difficult to change, or even to imagine a change.
Not a Repository?
Previously we have worked with academic repository systems designed for digital research papers, trying to wrangle them into something useful for learning and teaching. There were numerous problems with this – sweeping assumptions that they would naturally just work for learning resources, a lack of viable developer communities in terms of numbers (important for scale), open code that could only be interpreted and used by the original authors, poor interface / UX, and an obsession with entering metadata. Below is a very short video presentation reflecting on these earlier experiences and the beginnings of the ideas presented in this post:
The project phase would be organised and managed by a consortium, with the outputs (code and learning resources) being openly licensed to achieve maximum impact.
Service and Sustainability
In the service phase, with wider take-up, the service would remain free to college users and be paid for at source by existing national services as part of the consortium. The design of the system itself would ensure low service running costs – with little call being made on computing resources by the client-side authoring tool components. The service would be operated by a sub-set of the original project consortium. The project and service would also act as an ongoing demonstrator of what is possible using technology to enable and drive the wider fundamental cultural changes that are needed in the sector (see the Change section below). To be clear, these changes involve supporting the collaborative workflows between academics and institutions that are needed to develop an economically sustainable model to create, manage, and share the digital learning resources needed to effectively deliver open / flexible / blended learning in the college sector.
With the system code being open source and using popular OS components, it will be entirely possible for institutions to take and adopt the system for their own use at a pace determined by their appetite for new technologies and approaches. We envisage that a national service would continue alongside institutional adoption – a twin-track approach. The service providers would be able to generate income from consultancy activities helping organisations adopt the toolkit – in relation to integration and training. There is also the potential for this system to be attractive to users in the wider public and private sectors, for instance in relation to training and corporate communications. This in turn could provide revenue for the service providers. This is, of course, an optimistic view; much would depend on actual adoption by a core of users from the original project consortium to encourage others, and on effective dissemination and promotion.
In traditional F2F education, lecturers and instructors tend to learn their ‘craft’ on the job, and in the process the learning materials they create for their students record their own developing teaching and subject knowledge, acting as a touchstone to guide their own practice – a kind of professional ‘life support pack’. So, not surprisingly, they can become deeply attached to these resources.
Unfortunately, these traditional F2F methods of individual learning material creation do not scale to support blended / open / flexible learning and can obstruct the new work flows that are needed. Our current use of VLEs/ LMSs tends to reflect a traditional ‘silo’ model of teaching and obstructs the move to team teaching that is needed to make new forms of digital learning economical.
It is not surprising that this tension between traditional teaching modes and new blended modes exists. They each require a different organisation of academic work – or a division of labour, if you like. Colleges (and universities) currently struggle to accommodate these different approaches, as most of their practice is still in F2F mode, and this is reinforced by things like timetables, assessment modes, funding models, employment contracts and even student expectations. These ‘systemic’ factors are often overlooked in the research literature and downplayed in the commercial hype that sometimes dominates discourse about e-learning.
To understand the scale of this tension it helps to look at the economic models and workflows used by existing distance providers such as the Open University. In this model the learning resources are designed to take some of the pedagogic load that would normally be supplied by F2F contact. Here, there is a greater up-front investment in course design, in relation to planning learner activities and paths and the accompanying learning resource design. Typically, in this model a course has to run 5-7 times before it recoups its costs. This is at the other extreme of the spectrum from traditional campus-based education and operates in a radically different way. In this scenario all learning resources are jointly designed and managed centrally – this represents a key cultural move from ‘my course and my resources’ to ‘our course and our resources’.
Moving towards an effective and economically sustainable blended / flexible learning model requires a move along the pedagogic spectrum towards distance learning practice – especially in relation to course and learning resource design and their related collaborative workflows. Diana Laurillard argues that there is a need to rethink teaching in the 21st century as a design-intensive profession, in order to make the best use of technology and meet the challenges we face in society. These proposals are intended to play their part in supporting this transition in teaching practice by addressing some of the very practical issues involved.
Students and Learners
So far we have been discussing supplying students and learners with better learning resources and empowering academics and institutions to do this. This is important for the reasons set out here. However, what we are proposing is a general-purpose web content authoring tool that can support collaborative workflows, manage and share content, and integrate with the world of social media. Making this system available to students opens up a host of opportunities for education and peer-to-peer learning, and the possibility of creating a truly independent portfolio / journal system that users can take online and offline at will and that is totally under their own control.
As we come to the end of the Clipper project we have been thinking about the future and the thorny question of sustainability. We have decided to continue our work and are forming a small limited company to act as a legal and commercial ‘vehicle’ to carry the work forwards.
In the short term we are concentrating on getting the current round of development completed and the code out the door to the community. We will then seek to promote take-up – via adopting and using the code, or subscribing to a service we are planning to launch later this year. We have also identified some other funding opportunities to continue this work and develop related products using the same technical infrastructure. More on this subject soon.
I attended a very busy and interesting meeting of the Jisc RDN (Research Data Network) and gave a presentation about our work in the Clipper project. Many of the attendees were involved with the Jisc shared service pilots in this area. The event was held in the historic Corpus Christi College, and the main plenaries took place in the McCrum Lecture Theatre – up a side alley from the famous Eagle pub (where I had a very fine pint of Greene King IPA – after work). You never know what may turn up at these events, and it pays to keep an open mind about possible connections; this was one of those days when sparks seemed to fly from different ideas.
The day began with a really interesting and thought-provoking keynote from Danny Kingsley – the Head of Scholarly Communications at Cambridge. During this she mentioned the challenges presented by time-based data such as audio and video (Clipper, I thought!). But Danny also mentioned the growing field of data citation and the challenges it presents. This created Spark No. 1 – I thought to myself: Clipper is actually a form of data citation, specialising in time-based data (citing parts of a web data resource via a URI and making some comments about it in context).
The more I thought about this as I sat in the lecture theatre, the more notes I scribbled. Clipper is also a web annotation tool using the emerging W3C standards in this area, so that standard potentially provides a vehicle to create and transport data citations more generally. This got me thinking about the work we have been doing with the Roslin Institute at Edinburgh University in the Clipper project (see the draft ‘Clipper Snapshot Case Studies‘ document), where we discussed linking Clipper annotations to the DataCite DOIs ‘minted’ by Roslin for the data linked to the time-based media files we were annotating. The DOIs provide the provenance of the data we are ‘clipping’ and annotating. It made a lot of sense then in the Clipper project, and perhaps now in the wider field of general data citation. After all, the content of a W3C web annotation can carry any information we like, so it should be able to accommodate all disciplines and emerging data citation formats.
I was musing about this at the lunch break when I briefly bumped into Neil Jefferies (Head of Innovation at the Bodleian Library, Oxford), whom I knew from the Jisc Data Spring Programme. I was explaining these ideas to him when he added the idea of using the ORCID standard to identify researchers and link them to their data – Spark No. 2. It’s an attractive idea: use existing standards (DOI, ORCID) with the soon-to-be-standard W3C Web Annotation data model as a means of creating and transporting data citations. One advantage is that the citations themselves would be easily shared on the web, and so accessible by search engines and analytics services.
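The combination sketched above fits naturally into a single W3C Web Annotation document: a media-fragment selector cites a time segment, the target carries the DataCite DOI, and the creator carries an ORCID iD. The specific DOI, ORCID and comment below are invented for illustration:

```javascript
// Sketch of a data citation carried as a W3C Web Annotation.
// The DOI, ORCID iD and annotation text are invented examples.
const citation = {
  '@context': 'http://www.w3.org/ns/anno.jsonld',
  type: 'Annotation',
  creator: {
    id: 'https://orcid.org/0000-0002-1825-0097',  // example ORCID iD
    type: 'Person'
  },
  body: {
    type: 'TextualBody',
    value: 'Feeding behaviour begins at 12s and ends at 34s.'
  },
  target: {
    source: 'https://doi.org/10.5061/example',     // DataCite DOI of the dataset
    selector: {
      type: 'FragmentSelector',
      conformsTo: 'http://www.w3.org/TR/media-frags/',
      value: 't=12,34'                              // media fragment: 12s to 34s
    }
  }
};
```

Because this is plain JSON-LD, the citation can be shared on the open web and indexed like any other annotation, while the DOI and ORCID resolve the provenance of the data and the identity of the researcher.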
Perhaps at some point it would be useful to do some pilot work in this area…
Some images from the Cambridge event are below, and here is the Slideshare version of our workshop.
Addendum: Neil got back in touch and suggested I look at the subject of ‘nano pubs’ – at first, I have to confess, I thought of micro breweries! But a search showed up this link.
On Monday the 5th of September, at the Brighton DRHA conference, we will be presenting a workshop and forum about our new working prototype of the Clipper toolkit. Technical information about participating in the workshop appears below. This is our first outing of the new system, which has been completely reworked from the ground up in Angular 2 and MongoDB (using JSON-LD), with a NodeJS server. This has been a big undertaking for us, but it is now beginning to bring big benefits and opportunities.
Launch Clipper (NB use Chrome or Firefox for this test version)
To launch the toolkit, copy this link – http://18.104.22.168:8080 – into your web browser address bar and hit return to load the site.
We have created a series of test accounts that you can use, with user names ranging from clipper1@clippertube to clipper30@clippertube, each with a password matching the user name – e.g. clipper1@clippertube has the password clipper1. You can also register to create an account of your own. Please note that as this is a test system, any data you create will not persist in the long term. In the final production version of the system your data will persist, and you will be able to download a copy to keep (in different formats).
Clipper Workshop: Bring Your Own URLs
URL – Page (MP4 / MP3) – Online Test Resources – with their URLs
This page contains some online audio and video resources for you to use as source URLs to create clips in Clipper
This demonstrates the Clipper editor working directly with online audio and video files
Copy a URL of your choice (just clicking on it will open it in your browser player if you want to preview it)
Return to the Clipper editor and paste the copied URL into the field at the top of the editor window.
You can now play the resource and create and save clips using your chosen resource
If you have the URL for your own resources, you can try using them with the same method (MP4 / MP3 only)
Trevor and I attended the IAnnotate web annotation conference in Berlin this week, having been kindly alerted to it by colleagues at EUSCREEN. We had previously encountered the image annotation standard IIIF through colleagues from Digirati in the UK. Previous experience had made us a little wary, as standards work can sometimes lose contact with practical everyday experience and become an expensive end in its own right, consuming vast resources but leading nowhere – my own experience with educational interoperability standards confirms that :-).
So, we were wary of getting entangled in a standards runaway – and as it happens, some of the other participants had similar reservations about past standards initiatives, including W3C ones. However, our experience of attending the W3C working group briefing on the development of the web annotation standards was like a breath of fresh air. One statement in particular stuck in my mind – it went something like:
“Look, we don’t care what you do inside your own [web annotation] systems, but when you come to share your data with the outside world it makes sense to do it in a standardised way – so that others can make sense of it and use it”
This was the turning point for me – the little thing that revealed the intent – that, and the fact that the proposed standard is admirably practical and lightweight, and makes useful reuse of other W3C standards such as media fragments. Believe it or not, I have seen developers and designers trying to adopt a heavy standard internally in their systems in a slavish and sometimes pedantic manner – leading to what might most charitably be described as ‘sub-optimal outcomes’.
So, a great result for us from the conference – we also get a ready-made data model that we can adopt and build on without having to dream up our own, which also makes compliance with the emerging W3C web annotation standards easier and more useful.
This week John and Trevor are attending the I Annotate 2016 conference in Berlin; here is a link to the PDF of our presentation. The last four slides describe the new technical architecture of Clipper. We think it will fit well with the world of annotating the web; we are very much looking forward to finding out about this area of web development, as it fits so well with our plans, and we hope our conference colleagues will find our work interesting and useful.
On the 26th of September 2015 we held our first community consultation and co-design workshop. Here are the feedback and notes from the event – we shall be using these going forwards.
Some Initial Reflections
Table Feedback and Facilitator notes
Policy Implications for service development
Service Development Implications
Data Management Issues (RDM)
Access Control and Security and Rights
Service Development Implications
Sustainability / Support
Some Initial Reflections
After the data management feedback, we definitely need to check out the older Clipper project docs for standards, metadata and data flow etc.
Need an idea of the Clipper document structure (graphics)
Need a graphic of the workflow
Differentiation of Clipper from other tools is needed
Data management is of big interest to some. The search function will need to reflect this. Suggest we have a simple search option and then an advanced search option?
Suggestion for contextual search – OK but setting would need to be clear to the user – i.e. you need to know where you are searching
Interface issues – go for the biggest stumbling blocks first
We need to be able to accommodate more detailed data management model(s) in the future phase
Annotations are of big interest:
Tags and search tools for annotations, some commonly used; search by tag, title, description, full text, rights, (categories?); relation of annotations to parent clips and cliplists for searching. Granular search options.
Option to name Annotation and Describe its contents and more:
Title /Author(s) / Description /Tags
URI for sharing an annotation
Rights for annotations (use URI refs for the rights licences and terms)
Access permissions for annotations
Embargo on annotations
Rights / Licence statement on Annotations
Ability to have rich formatting in annotation
Annotation version control and track changes?
Annotate specific points in the frame
Links between the annotations i.e. jumping from one annotation to another (in the same clip / different clips in a cliplist / between different clips in different cliplists)
Import slides from PowerPoint
Can we reference the annotations as URIs?
Feedback on annotations (and clips and cliplists?). Allow comments on the feedback but stop it becoming a forum! Like Derek at Stirling – limit the conversation to an initial annotation by the ‘author’ (NB there could be more than one author) and then one level of comments by other users of the system. In the comments section people could have a ‘conversation’ using the Twitter-style protocols of @ and perhaps # – like in the BBC blogs etc.
Annotations for screen readers – needs to be accessible
Also suggestion to be able to annotate parts of the screen – either by textual reference (top right) or graphically by use of the Canvas property
Have a pencil icon in an annotation list view to open simple editing functions; have a WordPress-style ‘kitchen sink’ icon to open the rich text editing functions. Perhaps have the option to have the editing window expand to fill the available space for text-intensive work and then toggle back to the smaller default display?
User customisation options for both authors and consumers – to choose the interface style they want to work with and functions
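The annotation metadata wish list above could be sketched as a simple record structure. This is an illustrative sketch only – the class and field names (`Annotation`, `embargo_until`, etc.) are assumptions for discussion, not the actual Clipper schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical annotation record based on the workshop wish list;
# field names are illustrative, not the real Clipper data model.
@dataclass
class Annotation:
    title: str
    authors: List[str]                  # NB there could be more than one author
    description: str = ""
    tags: List[str] = field(default_factory=list)
    uri: Optional[str] = None           # URI for sharing / citing the annotation
    rights: Optional[str] = None        # URI ref to a licence, e.g. a CC licence
    access: str = "private"             # owner / named people / unit / org / public
    embargo_until: Optional[datetime] = None
    start_seconds: float = 0.0          # position within the parent clip
    end_seconds: Optional[float] = None
    body: str = ""                      # rich-text (HTML) content

    def is_visible(self, now: datetime) -> bool:
        """An embargoed annotation stays hidden until its release date."""
        return self.embargo_until is None or now >= self.embargo_until
```

The point of the sketch is that almost every requested feature (rights, embargo, sharing level, granular search) reduces to one well-defined field, which is why a clear data model was flagged as the foundation for everything else.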
Access Control and Sharing
SSO for security and access levels of permissions
Permissions (General) Ability for admin to act as superusers
To determine the access privileges of users and to determine the parts of the toolkit they have access to (e.g. simple text or full formatting tools etc.)
Restrict the collections they have access to
Restrict how widely they can share
Sharing levels of permissions? To share things – default is project defined and levels start with the owner then named people, then organisational units then whole organisation.
Onion ring metaphor for sharing in the system is useful
Make Clipper stand out from competitors
Clipper on the local network – simple but effective?
Upload slides to display in synch with a video
Bookmarklet in browsers for adding video to their existing user clipper collections?
Integration with Kaltura? And similar systems – would be good to try in a Phase 3 scenario; we have had an enquiry from a university
Need an option to have attribution credits and legal statement at start of each clip
Annotations attribution credits and legal statement
Project attribution credits and legal statement
Option for a project / clip IPR audit statement (could be made mandatory?) where the author enters the ownership and rights status of the project / clip (NB it would be good to have as much automatic entry as possible from settings pre-entered by the system, the author and the admin)
Embargo options on revealing projects / clips / annotations (timed settings) to their pre-determined level of sharing in the system.
There are a range of other legal issues to consider beyond copyright (as in any digital media) such as privacy / data protection / confidentiality clauses / know how / patents / etc. would be good to be able to record them somehow
Editing the Project name and description needs to be more intuitive
The active resource / clip needs to be highlighted in the list to the left
Will annotations pop up when you reach them?
If you wanted to annotate a whole video, would you have the option to do that as one clip?
Table Feedback and Facilitator notes
Policy Implications for service development
Not just platform related issues – we need to fit into Institutional Policy and Practice
A range of access management needs from just me to the whole world and points in between (onion skin metaphor)
Capture and represent the intentions of the authors in respect to copyright, sharing and access and provide means for users to communicate and follow up with authors / creators.
Usage should be / is covered by existing institutional policy – really? How would we know? It’s not my job!…implications?
Should highlight copyright statements from sources included in a very clear way
It is important that there exists a clear and detailed document / metadata/ data model structure (that acts as a foundation for future extensions etc.)
Search options should be deep and full text with filters
Access privileges should be controllable to fine levels of clip and annotation
Service Development Implications
Set embargoes on projects / clips / annotations
Need access controls and privacy levels down to annotations not just to clips
Need to highlight (to service providers) the importance of permanence in relation to URIs and URLs and permalinks and persistence generally
Could clipper be integrated with Office 365 for deployment in the cloud for institutions and with individual usage with access to CDN style facilities for upload and transcoding and sharing etc.? This might satisfy the needs of many individual researchers?
Using annotations as surrogate access to inaccessible archive resources
Data Management Issues (RDM)
Sensitive Privacy issues affecting storage and access
Retention policy also needs to reflect funding agreements and content
Permissions and access control:
Single sign on can help by enabling levels of access / control
Need access to videos / audio to be set by these controls
Password protections (?)
Encryption of data as an option (?)
Creative Commons – no modifications – would Clipper infringe this?
Ownership & Rights etc.: / Original researchers / Producers / Contractors / Contributors / Subjects etc. / Institutional / Funding Body
Usage of Data controlled by requirements of funders and institutional policy
The different ways the researcher wants to categorise things
Annotation start and end timestamps
Annotation version control
Links between annotations (and other clipper items?)
Annotate specific points in a frame
Pop up and or info symbol to remind you to set the time of the annotation then save
Click into toggled annotation (straight to clip section)
HTML / rich text editor for annotations (and descriptions? Of clips / cliplists / projects)
Annotations should be readable by screen readers
Stages need to be more intuitive – need detail / prompt of where to go next
User needs to get feedback when they have made a change / completed an action
Add drag / nudge / slider controls for in and out points
Be able to include transitions
Be able to include slides for titles and rights etc.
Be able to cite clips
Have title screen options for clips and Cliplists have slides for citations?
Feedback? – See the comments discussions above
User feedback – e.g. the video had been added to project resources
User feedback – save clip – confirmations
If one of the clips in the cliplist needs a login the user / consumer should be alerted to this at the start or before (padlock symbol? – with number to represent the number of locked clips)
Transitions between clips – authors should have a choice of options
Simplicity is key
Ability to drag and drop annotations would be great
Have a bookmark / Clipper / button plugin in the browser
jQuery – adding media on a bookmarked page to the Clipper resources / project
Allow user to upload slides and display them next to the video – possible integration with slideshare? Etc.
Should leave transition choices to end-user?
Is this something Final Cut Pro could help, in terms of approach?
Can you reorder the clips?
You might also want to include icons for locally hosted items – so that the playlist creator knows what can or cannot be seen by others (who likely won’t be able to access a file on a local machine or local network).
General comment – web-based service is a real advantage. Not many options, nothing to download, that is important. Capitalise on that… At the moment it looks more complex than it is. It has to not just be simple, but also look simple and user friendly.
I was wondering about nudging start and stop points…with ability to set time manually by typing
I think you will need real usability testing to watch people using the tool, rather than asking them… And that will highlight where there is any misunderstanding. When I chose a video for a collection. How do I do anything creative with those clips… To merge or play all etc…
Maybe you’d edit down elsewhere… Something to do with the content I have.
From a users point of view you need confirmations on screen to highlight things have been created, saved, etc. For creating a clip, start and end, I didn’t get any visual confirmation. Need that to make it clear.
It would be helpful to have maybe a pop up, or information symbol to remind you to cut off the clip. Thinking about the likely users here. Would be useful to have reminders.
That issue of annotations also raises the issue of what the playback experience is. And how annotations etc. are part of that…
Access Control and Security and Rights
Non public videos and clips etc. need to be protected – use Single Sign On (SSO) to set and determine access control / permissions / tracking
Coping with / representing different rights holders (sometimes in the same institution)
What happens if you have a clip on a password protected Vimeo, etc.
But you would want students to be able to login, perhaps
All the moving image content on our site is only licensed for one site… Would this sit on the organisations site? Where is it going?
What if you have a video that specifies only the servers/IPs that can be used – which you can do on Vimeo – how would that work with Vimeo?
Can you display rights information here – they should be available in metadata with video and/or APIs and are really important to indicate that.
US vs. UK copyright, particularly thinking about Fair Use content which might be legally acceptable in the US, but not in the UK.
Related to usage – at that level of usage, that issue would be a great problem to have though!
A slide for the beginning or the end with credits etc. generated in the system would be useful. Would help with rights information.
Delete annotations / clips / cliplists
Show / store input URIs
Create Tags / Categories
Searchable annotations would be really useful. And find all the relevant annotations and tags. Things like NVivo do that.
Clipper is potentially useful for annotating very large files – so it should work on a local high-speed network
Is there a way to separate out audio and video to save to different channels… So that you can strip down to just the audio… Maybe you just want to capture that.
Could be done server side but easiest is to just use the player to hide the video
What about a web resource becomes available… And disappears… Hyperlinks can disappear and that would be a concern when I come to share it… And when I invest that time. And it’s quite likely… If a web link is dead, it’s a problem.
Not about trust, but fragility of web links…
I think that notifications (of broken links – by Clipper) would be useful here. But maybe also something that can be cached or kept so there is a capture of that.
Check out the work of Herbert Van de Sompel in relation to overcoming broken links in academic communications – the Hiberlink project
Notifications would be really important (to let authors know of a link going down).
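A broken-link notification service like the one suggested could start from a simple liveness check run periodically over stored clip source URLs. A minimal sketch – the function name and approach are assumptions, not part of Clipper:

```python
import urllib.error
import urllib.request

def link_is_alive(url: str, timeout: float = 5.0) -> bool:
    """Crude liveness check: does the URL still respond with a non-error
    status? A notification service could run this over all stored clip
    source URLs and alert authors when a link goes down."""
    try:
        # HEAD avoids downloading the video itself
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

A real service would also need retries and caching (a link can be briefly unreachable without being dead), which is where the Hiberlink-style archiving ideas come in.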
You are pulling things through from websites elsewhere. If you make your own interview, can you upload it here? Or do you upload elsewhere and pull in URL?
My question is a bit different… Maybe how the clip is created… There are so many people who share clips and compilations of video items…
The ability to compile / print a cliplist with video stored in the list is a requirement for some people
How do you publish this content? Do you share the playlist? Do you need a Clipper account to view it?
If it’s research data a lot of this will be sensitive, and have to be within your control and your own students…
We do use some cloud-based services for student data though, so there must be some possibility there.
Duration of storage – Some for long-term, some quite short.
Some funders will have requirements too. But we were also talking about non-public video content… Maybe need two systems with permissions lined up… Asking students to sign in twice can be confusing. Institutional single sign on might be useful – map permissions across. But can the system recognise right to access data.
My students have access to very private recordings that have to be secure and have to be retained in that way, and keep it secure.
A question really: if it is someone else’s data and shared under CC licence (ND) – do clipper clips count as modifications or not?
About how that is presented?
Although not the totality of data, it’s usually what supports publications. But open access aspect is certainly important. Clipper could find its way into that kind of environment and could be a good tool to show off some of your research data.
How much do you need to worry about, how much is for institutions to worry about? Like data ownership etc. But you may need to worry about as a platform.
And for access you’d want a lot of granularity of who might access these things, might be a large group or public, or might just be you, or just be a small group.
Having users fill in a field where they can state what they think the copyright is.
A statement of intent / knowledge?
Yes, something that allows you to have a comeback if a collections owner comes back…
Policy implications wise, there aren’t really any cases that shouldn’t already be covered by institutional policies (?). Licenses, derivative works, etc. should already be covered by institutional policies. Maybe some special cases…
Are policies fit for purpose?
It is usually awareness not existence of policies (?)
Possibly a pop up indicating licence and appropriate usage, so you know what you can do. Second aspect: if you can legally modify videos, why not do that on a desktop system offline? If not, then how can this comply? It only removes the issue of making copies. Sorry for a super defeatist comment, but how does this differ from what else is there?
I come at this from two places… Both the way into the lumpy a/v content, interrogate, search it, etc.… And then also this more creative tool where you make something else available on the internet – alarm bells start ringing. For the creative side, why not use iMovie etc.
This is a frequent comment – we want to stay away from ‘printing’ video, but a Clipper cliplist could be used to create an EDL (Edit Decision List) that could be imported into an editing tool and used to ‘print’ a cliplist?
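As a rough illustration of the EDL idea, a cliplist of (start, end) times could be turned into CMX-3600-style event lines. This is a loose sketch of the format only, not a compliant exporter:

```python
def seconds_to_timecode(seconds: float, fps: int = 25) -> str:
    """Convert seconds to HH:MM:SS:FF timecode at the given frame rate."""
    frames = int(round(seconds * fps))
    ff = frames % fps
    total = frames // fps
    return f"{total // 3600:02d}:{(total % 3600) // 60:02d}:{total % 60:02d}:{ff:02d}"

def cliplist_to_edl(clips, fps: int = 25) -> str:
    """Emit rough EDL-style event lines from (start, end) pairs in seconds.
    Source in/out come from the clip; the record timeline runs continuously,
    so an editing tool could 'print' the cliplist as one sequence."""
    lines, record = [], 0.0
    for i, (start, end) in enumerate(clips, 1):
        dur = end - start
        lines.append(
            f"{i:03d}  AX       V     C        "
            f"{seconds_to_timecode(start, fps)} {seconds_to_timecode(end, fps)} "
            f"{seconds_to_timecode(record, fps)} {seconds_to_timecode(record + dur, fps)}"
        )
        record += dur
    return "\n".join(lines)
```

Because a cliplist already stores the in/out points, the export is a pure translation step – Clipper itself never touches the video.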
It’s not a video editing tool, it’s annotation. So clearly not that…
For digital preservation… preserving video is relatively difficult and is an ongoing process. Clips are basically JSON descriptions – easy to preserve.
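For example, a clip can be captured as a small JSON document that round-trips losslessly, which is what makes preservation easy. The field names here are illustrative, not the actual Clipper format:

```python
import json

# A hypothetical clip description (illustrative field names, not the
# real Clipper format): just a source URI plus in/out points and metadata.
clip = {
    "source": "https://example.org/video/lecture1.mp4",
    "start": 12.5,
    "end": 48.0,
    "title": "Key argument",
    "annotations": [
        {"at": 15.0, "text": "Definition of terms"},
    ],
}

serialized = json.dumps(clip, indent=2, sort_keys=True)
restored = json.loads(serialized)  # lossless round trip – trivial to archive
```

Preserving the clip is then just preserving a few hundred bytes of text; the hard preservation problem (the video itself) stays with the content provider.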
Check out the Dedoose academic tagging tool
A very good content tool. But I think being very clear on what this thing is for… And making it really good for these things. Really focusing on the annotations and textual aspects more.
Service Development Implications
Embargoes, on metadata, and issues of privacy, access, and license for annotations for the same reasons.
What about bandwidth?
It depends on the video delivery…
It’s not your issue really. It’s for content providers…
The system depends on you having a consistent URI for a playable version of a video… That may be an issue depending on how files are held.
Several in the room indicate they are using them…
Making (some?) annotations (and metadata?) public will help others find your data.
Costs wise it needs to be open source for people to import themselves? And if so, how can you skin it and brand it. And how often does it need maintenance and updates.
Searchable tags etc. – perhaps use of hash tags in annotations
Tagging etc. and putting things into categories
Sustainability / Support
org for support….?
Identify things that could make Clipper stand out from the ‘competition’
Keep it simple
Focus on annotations
Is there documentation for the code so far?
API for Clipper? So others can use the annotations etc.
If sensitive data, and videos, then annotations might also want to be private… rather than being on your server… (raises the possibility of a local version of Clipper – either in the institution or for an individual – integration with MS Azure?)
Or could they get a private instance from you?
We haven’t talked much about searching capabilities.
I think for effective searching you are going to want to have a more complex annotation data structure – so you can do filters, indexing etc. so less computationally taxing and more accurate for users.
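One sketch of that more complex structure: an inverted index from tags to annotation ids, so tag filters become cheap set intersections instead of full scans. Names here are illustrative only:

```python
from collections import defaultdict

def build_tag_index(annotations):
    """Map each tag to the set of annotation ids carrying it.
    `annotations` is a dict of {annotation_id: [tags]} – a stand-in
    for whatever store Clipper ends up using."""
    index = defaultdict(set)
    for ann_id, tags in annotations.items():
        for tag in tags:
            index[tag].add(ann_id)
    return index

def search(index, *tags):
    """Return annotation ids matching ALL the given tags (intersection)."""
    sets = [index.get(t, set()) for t in tags]
    return set.intersection(*sets) if sets else set()
```

The same pattern extends to title, author, and rights fields – index once on write, then filtering stays fast and accurate however many annotations accumulate.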
Does the system log who has created which annotation? So you can track who does what on a research project.