During this course, you will discover advanced HTML5 techniques to help you develop innovative projects and applications. Please try to work on one of the many proposed optional projects, to be found at the end of each section. Remember that if you are not comfortable with JavaScript, no worries. Just start creating from one of the provided examples or follow our JavaScript Introduction course. And most of all, have fun!
## Table of contents

- [About W3C and the Web](#cha)
  - [About W3C and the Web](#cha-1)
  - [Why accessibility is important](#cha-2)
  - [Why internationalization is important](#cha-3)
- [Course information](#chb)
  - [Welcome to "HTML5.2x - Apps and Games"](#chb-1)
  - [Course outline, grading and due dates](#chb-2)
  - [Course practical information](#chb-3)
  - [Course tools](#chb-4)
- [Module 1: Advanced HTML5 Multimedia](#ch1)
  - 1.1. [Video introduction - Module 1](#ch1-1-1)
  - 1.2. [The Timed Text Track API](#ch1-2-1)
  - 1.3. [Advanced features for audio and video players](#ch1-3-1)
  - 1.4. [Creating tracks on the fly, syncing HTML content with a video](#ch1-4-1)
  - 1.5. [The Web Audio API](#ch1-5-1)
- [Module 2: Game programming with HTML5](#ch2)
  - 2.1. [Video introduction - Module 2](#ch2-1-1)
  - 2.2. [Basic concepts of HTML5 game development](#ch2-2-1)
  - 2.3. [A simple game framework: graphics, animations and interactions](#ch2-3-1)
  - 2.4. [Time-based animation](#ch2-4-1)
  - 2.5. [Animating multiple objects, collision detection](#ch2-5-1)
  - 2.6. [Sprite-based animation](#ch2-6-1)
  - 2.7. [Game states](#ch2-7-1)
- [Module 3: HTML5 file upload and download](#ch3)
  - 3.1. [Video introduction](#ch3-1-1)
  - 3.2. [File API and Ajax / XHR2 requests](#ch3-2-1)
  - 3.3. [Drag and drop: the basics](#ch3-3-1)
  - 3.4. [Drag and drop: working with files](#ch3-4-1)
  - 3.5. [Forms and files](#ch3-5-1)
  - 3.6. [IndexedDB](#ch3-6-1)
  - 3.7. [Conclusion on client-side persistence](#ch3-7-1)
- [Module 4: Web components and other HTML5 APIs](#ch4)
  - 4.1. [Web Components](#ch4-1-1)
  - 4.2. [Web Workers](#ch4-2-1)
  - 4.3. [The Orientation and Device Motion APIs](#ch4-3-1)
  - 4.4. [Where to from here?](#ch4-4-1)
Course outline

Play with audio and video in "Module 1: Advanced HTML5 multimedia".

Start programming an HTML5-based game of your own in Module 2! Learn the core techniques for writing 2D video games that run at 60 frames/s. Discover basic game development concepts, set up a simple game framework, understand time-based animation, animate multiple objects, use sprite sheets, detect collisions, deal with gamepad events, and many more.

Module 3 is about drag and drop, uploading/downloading files with Ajax/XHR2, and IndexedDB. The Indexed Database API is a recommended standard interface for a local database of records holding simple values and hierarchical objects. Note that IndexedDB is also useful for storing your HTML5 game scores!

This is the last set of lectures! Module 4 gives a lot of space to Web components, as they help build Web pages using ready-to-use standardized building blocks. Web components comprise Custom Elements, Shadow DOM, and HTML Imports and Templates. These specifications are under continuous development at W3C.
This module also gives you a flavor of other HTML5 APIs, such as the Orientation API, which is useful for monitoring and controlling games and other activities, and Web Workers, which introduce the power of parallel processing to Web apps.

This HTML5 Apps and Games course is part of the "Front-End Web Developer" (FEWD) Professional Certificate program. To get this FEWD certificate, you will need to successfully pass all 5 courses that compose that program. Find out more on w3cx.org!

If you already have a verified certificate in one or more of these courses, you do NOT need to re-take that course.

Additional information:

If you are new to the edX platform, we encourage you to check out DemoX, a quick walk-through of an edX experience. It will help answer basic "how to's" on taking an edX course.
You will dive into advanced techniques, combining HTML5, CSS and JavaScript, to create your own HTML5 app and/or game.

During this course, you will notably learn:

- Advanced multimedia features with the Track and Audio APIs
- HTML5 games techniques
- More APIs, including Web Workers and Service Workers
- Web components
- Persistence techniques for data storage, including IndexedDB, the File System API, and Drag and Drop
Web Browsers and Editors
Web browsers

Not surprisingly, it would be helpful to have a browser (short for "Web browser") installed so that you can see the end result of your source code. The most common browsers are [Edge](https://www.microsoft.com/en-us/edge) (and IE), [Firefox](https://www.mozilla.org/en-US/firefox/new/), [Chrome](https://www.google.com/chrome/), [Safari](http://www.apple.com/safari/), etc.

Look for the [history of Web browsers](https://en.wikipedia.org/wiki/Web_browser#History) (on Wikipedia). An interesting resource is the [market and platform market share](https://www.w3counter.com/globalstats.php) (updated regularly).
While any text editor, like NotePad or TextEdit, can be used to create Web pages, they don't necessarily offer a lot of help towards that end. Some others offer more facilities for error checking, syntax coloring and saving some typing by filling things out for you. Check the following sample:

- [Sublime Text](https://www.sublimetext.com/) is quite popular with developers, though there can be a bit of a learning curve to use its many features.
- [Notepad++](https://notepad-plus-plus.org/) - on Windows PCs.
- [Visual Studio](https://visualstudio.microsoft.com/) - on Windows PCs, many developers are already familiar with it.
- TextEdit - This is available on Macs, but be sure you're [saving as plain text](https://discussions.apple.com/message/5014514#5014514), not as a .rtf or .doc file.
- [BlueGriffon](http://bluegriffon.org/) is a WYSIWYG ("What You See Is What You Get") content editor for the World Wide Web. Powered by Gecko, the rendering engine of Firefox, it's a modern and robust solution to edit Web pages in conformance to the latest Web standards.
- [XCode](https://developer.apple.com/xcode/) - Mac developers may be familiar with XCode.
- [Atom](https://atom.io/) is another cross-platform editor, created by [GitHub](https://github.com/).
- [Vim](https://www.vim.org/) or [Emacs](https://www.gnu.org/software/emacs/) are great editors on which the Web was built, but if you're not already familiar with these, this isn't the time to try.

To help you practice during the whole duration of the course, you will use the following online editor tools. Pretty much all of the course's examples will actually use these.
JS Bin

JS Bin is an open source collaborative Web development debugging tool. This tool is really simple: just open the link to the provided examples, look at the code, look at the result, etc. You can modify the examples as you like, and you can also clone, save and share them.

Tutorials can be found on the Web (such as this one) or on YouTube. Keep in mind that it's always better to be logged in (it's free) if you do not want to lose your contributions/personal work.
CodePen

CodePen is an HTML, CSS, and JavaScript code editor that previews/showcases your code bits in your browser. It helps with cross-device testing, real-time remote pair programming and teaching.

This is a great service to get you started quickly, as it doesn't require you to download anything and you can access it, along with your saved projects, from any Web browser. Here's an article which will be of interest if you use CodePen: Things you can do with CodePen [Brent Miller, February 6, 2019].

There are many other handy tools such as JSFiddle, and Dabblet (Lea Verou's tool that we will use extensively in a future CSS course). Please share your favorite tool on the discussion forum, and explain why! Also share your own code contributions, such as a nice canvas animation or a great looking HTML5 form. Sharing them using JS Bin, or similar tools, would be really appreciated.
W3C Validators

For over 20 years, the W3C has been developing and hosting free and open source tools used every day by millions of Web developers and Web designers. All the tools listed below are Web-based, and are available as downloadable sources or as free services on the W3C Developers tools site.
The [CSS validator](https://jigsaw.w3.org/css-validator/) checks Cascading Style Sheets (CSS) and (X)HTML documents that use CSS stylesheets.
Unicorn

Unicorn is W3C's unified validator, which helps people improve the quality of their Web pages by performing a variety of checks. Unicorn gathers the results of the popular HTML and CSS validators, as well as other useful services, such as RSS/Atom feeds and HTTP headers.
Link Checker

The W3C Link Checker looks for issues in links, anchors and referenced objects in a Web page, CSS style sheet, or recursively on a whole Web site. For best results, it is recommended to first ensure that the documents checked use valid HTML markup.
Internationalization Checker

The W3C Internationalization Checker provides information about various internationalization-related aspects of your page, including the HTTP headers that affect it. It also reports a number of issues and offers advice about how to resolve them.

W3C CheatSheet

The W3C cheatsheet provides quick access to useful information from a variety of specifications published by W3C. It aims at giving, in a very compact and mobile-friendly format, a compilation of useful knowledge extracted from W3C specifications, completed by summaries of guidelines developed at W3C, in particular Web accessibility guidelines, the Mobile Web Best Practices, and a number of internationalization tips.

Its main feature is a lookup search box, where one can start typing a keyword and get a list of matching properties/elements/attributes/functions in the above-mentioned specifications, and further details on those when selecting the one of interest.

The W3C cheatsheet is only available as a pure Web application.
The term browser compatibility refers to the ability of a given Web site to appear fully functional on the browsers available in the market.

The most powerful aspect of the Web is what makes it so challenging to build for: its universality. When you create a Web site, you're writing code that needs to be understood by many different browsers on different devices and operating systems!

To make the Web evolve in a sane and sustainable way for both users and developers, browser vendors work together to standardize new features, whether it's a new HTML element, CSS property, or JavaScript API. But different vendors have different priorities, resources, and release cycles, so it's very unlikely that a new feature will land on all the major browsers at once. As a Web developer, this is something you must consider if you're relying on a feature to build your site.

We therefore provide references to the browser support of the HTML5 features presented in this course using two resources: Can I Use and Mozilla Developer Network (MDN) Web Docs.

Can I use

Can I Use provides up-to-date tables for support of front-end Web technologies on desktop and mobile Web browsers. Below is a snapshot of the information given by CanIUse when searching for "CSS3 colors".
MDN Web Docs

To help developers make these decisions consciously rather than accidentally, MDN Web Docs provides browser compatibility tables in its documentation pages, so that when looking up a feature you're considering for your project, you know exactly which browsers will support it.
Most of the technologies you use when developing Web applications and Web sites are designed and standardized in W3C in a completely open and transparent process.

In fact, all W3C specifications are developed in public GitHub repositories, so if you are familiar with GitHub, you already know how to contribute to W3C specifications! It is all about raising issues (with feedback and suggestions) and/or bringing pull requests to fix identified issues.
Contribute

Contributing to this standardization process might be a bit scary or hard to approach at first, but understanding at a deeper level how these technologies are built is a great way to build your expertise.

If you're looking for an easy way to dive into these standardization processes, check out which [issues in the W3C GitHub repositories have been marked as "good first issue"](https://github.com/search?q=org%3Aw3c+label%3A%22good+first+issue%22+state%3Aopen&type=Issues) and see if you find anything where you think you would be ready to help.

Shape the future

Another approach is to go and bring feedback on ideas for future technologies: the [W3C Web Platform Community Incubator Group](https://wicg.io/) was built as an easy place to get started to provide feedback on new proposals or bring brand-new proposals for consideration.

Happy Web building!
What is W3C?

As steward of global Web standards, W3C's mission is to safeguard the openness, accessibility, and freedom of the World Wide Web from a technical perspective.

W3C's primary activity is to develop protocols and guidelines that ensure long-term growth for the Web. The widely adopted Web standards define key parts of what actually makes the World Wide Web work.
A few history bits
In March 1989, while at CERN, Sir Tim Berners-Lee wrote "Information Management: A Proposal", outlining the World Wide Web. Tim's memo was about to revolutionize communication around the globe. He then created the first Web browser, server, and Web page. He wrote the first specifications for URLs, HTTP, and HTML.

Tim Berners-Lee at his desk in CERN, 1994

In October 1994, Tim Berners-Lee founded the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology, Laboratory for Computer Science (MIT/LCS), in collaboration with CERN, where the Web originated (see information on the original CERN Server), with support from DARPA and the European Commission.

In April 1995, Inria became the first European W3C host, followed by Keio University of Japan (Shonan Fujisawa Campus) in Asia in 1996. In 2003, ERCIM took over the role of European W3C Host from Inria. In 2013, W3C announced Beihang University as the fourth Host.
A few figures
As of August 2020, W3C:

- Is a [member](https://www.w3.org/Consortium/Member/List)-driven organization composed of approximately 430 companies, universities, start-ups, etc. from all over the world.
- Holds 46 [technical groups](https://www.w3.org/groups/), including [Working Groups](https://www.w3.org/groups/wg/) and [Interest Groups](https://www.w3.org/groups/ig/), where technical specifications are discussed and developed.
- Has published over 7,254 [technical reports](https://www.w3.org/TR/), including 434 Web standards (or W3C Recommendations), since January 1st, 1995.
- Runs a [translation program](https://www.w3.org/Consortium/Translation/) to foster the translation of its specifications: see the [translation matrix](https://www.w3.org/Consortium/Translation/matrix.html), currently listing 309 available translations of W3C Recommendations.
- Hosts 338 [Community and Business Groups](https://www.w3.org/community/groups/), where developers, designers, and anyone passionate about the Web have a place to hold discussions and publish ideas.
- Gathers over 13,129 active participants constituting the W3C community.
- Has a [technical staff](https://www.w3.org/People/) composed of 64 people, spread over all five continents.

W3C's core values

Committed to core values of an open Web that promotes innovation, neutrality, and interoperability, W3C and its community are setting the vision and standards for the Web, ensuring the building blocks of the Web are open, accessible, secure, international and have been developed via the collaboration of global technical experts.
Essential Steps in Web i18n
You will find below three examples (and checks!) to help you ensure that your Web page works for people around the world, and to make it work differently for different cultures, where needed. Let's meet the words 'charset' and 'lang', soon to become your favorite markup ;)
Example #1: character encoding declaration
A character encoding declaration is vital to ensure that the text in your page is recognized by browsers around the world, and not garbled. You will learn more about what it is, and how to use it, as you work through the course. For now, just ensure that it's always there.

Check #1: There is a character encoding declaration near the start of your source code, and its value is UTF-8.
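As a quick illustration, here is a minimal sketch of an HTML5 page containing such a declaration (the title and text are only placeholders):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- Character encoding declaration: keep it early in <head>,
         within the first 1024 bytes of the document -->
    <meta charset="utf-8">
    <title>My page</title>
  </head>
  <body>
    <p>Text in any script can now be displayed without being garbled.</p>
  </body>
</html>
```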
For a wide variety of reasons, it's important for a browser to know what language your page is written in: it matters for font selection, text-to-speech conversion, spell-checking, hyphenation and automated line breaking, text transforms, automated translation, and more. You should always indicate the primary language of your page in the <html> tag. Again, you will learn how to do this during the course. You will also learn how to change the language, where necessary, for parts of your document that are in a different language.

Check #2: The HTML tag has a lang attribute which correctly indicates the language of your content.

The example below indicates that the page is in French.
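A minimal sketch of such a declaration (the page content shown here is only illustrative):

```html
<!DOCTYPE html>
<html lang="fr">
  <head>
    <meta charset="utf-8">
    <title>Page d'exemple</title>
  </head>
  <body>
    <p>Bonjour tout le monde !</p>
    <!-- lang can be overridden on any element for a passage in another language -->
    <p lang="en">This paragraph is in English.</p>
  </body>
</html>
```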
People around the world don't always understand cultural references that you are familiar with, for example the concept of a 'home run' in baseball, or a particular type of food. You should be careful when using examples to illustrate ideas. Also, people in other cultures don't necessarily identify with pictures that you would recognize; for example, hand gestures can have quite unexpected meanings in other parts of the world, and photos of people in a group may not be representative of populations elsewhere. When creating forms for capturing personal details, you will quickly find that your assumptions about how personal names and addresses work are very different from those of people from other cultures.

Check #3: If your content will be seen by people from diverse cultures, check that your cultural references will be recognized and that there is no inappropriate cultural bias.

Don't worry!
The following 7 quick tips summarize some important concepts of international Web design. They will become more meaningful as you work through the course, so come back and review this page at the end.

1. Encoding: use the UTF-8 (Unicode) character encoding for content, databases, etc. Always declare the encoding.

2. Language: declare the language of documents and indicate internal language changes.

3. Navigation: on each page include clearly visible navigation to localized pages or sites, using the target language.

4. Escapes: use characters rather than escapes (e.g. the character á rather than an escaped form such as &#xE1; or &aacute;) whenever you can.

5. Forms: use UTF-8 on both the form and the server. Support local formats of names/addresses, times/dates, etc.

6. Localizable styling: use CSS styling for the presentational aspects of your page. So that it's easy to adapt content to suit the typographic needs of the audience, keep a clear separation between styling and semantic content, and don't use 'presentational' markup.

7. Images, animations & examples: if your content will be seen by people from diverse cultures, check for translatability and inappropriate cultural bias.

You will find more quick tips on the Internationalization quick tips page. Remember that these tips do not constitute complete guidelines.

Internationalization checker

When you start creating Web pages, you can also run them through the W3C's Internationalization Checker. If there are internationalization problems with your page, this checker explains what they are and what to do about it.
Module 1 - Advanced HTML5 Multimedia

Hi! Welcome to Part 2 of the W3C HTML5 course. How about we start by looking at advanced HTML5 multimedia features?

First, we will look at the Web Audio API, which helps with processing and synthesizing audio in Web applications. You will be able to load sound samples into memory and play them, loop them, and process them through a chain of sound effects such as reverberation, delay, graphic equalizer, compressor, distortion, etc. You can also write nice real-time visualizations like dancing frequency graphs and animated waveforms that dance with the music, or generate music programmatically.

The Web Audio API is particularly suited for games or for music applications.

A second nice multimedia feature is the Track API. With it, you will be able to synchronize a video with elements in your document. For example: display a Google Map, an HTML description or a Wikipedia page aside the video while it's playing.

As always, do not hesitate to practice coding by looking at the interactive examples, and then please share your own creations in the discussion forum!

We hope you will enjoy this first week and we wish you the best!
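As a small teaser of the Web Audio API mentioned above, here is a minimal, hedged sketch that loads one sample and plays it through a gain node; the sound URL and variable names are placeholders, not the course's own example:

```js
// Minimal Web Audio sketch: fetch a sample, decode it, play it through a gain node.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

fetch('sound.mp3')                                  // placeholder URL
  .then(response => response.arrayBuffer())
  .then(data => audioCtx.decodeAudioData(data))
  .then(decodedBuffer => {
    const source = audioCtx.createBufferSource();   // plays a decoded sample
    source.buffer = decodedBuffer;

    const gainNode = audioCtx.createGain();         // simple volume control
    gainNode.gain.value = 0.8;

    // Build the audio graph: source -> gain -> speakers
    source.connect(gainNode);
    gainNode.connect(audioCtx.destination);
    source.start();                                 // play the sample once
  });
```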
1.2 Intro to the Timed Text Track API
Hi, today I've prepared for you a small example of a video that is associated with three different tracks.

Two for subtitles, in English and in German, and one track for chapters.

First, before going further, let's look at how this is rendered in different browsers. With Google Chrome, we've got a CC button here that will enable or disable subtitles. What we can see is that by default the subtitles that are displayed are in German (in this example).

And I can just switch them on and off. It loaded the first track that has the default attribute. And I have no menu for choosing what track, what language I want to be displayed here...

If we look at Firefox, it's even worse!

We don't have any menu at all, no CC button. I cannot switch on the subtitles, because as of December 2015, Firefox will load only the first track, if it has the default attribute.

This is not the case, so we don't have any subtitles and we cannot display them.

With Safari, on my Mac, it's better because I've got a subtitle menu and I can choose between the different tracks. I can switch to English subtitles, and English subtitles will be displayed.

I'm on a Macintosh so I cannot show you... but with other browsers like Internet Explorer or Microsoft Edge, the situation is similar to Safari.

What can we do to increase the features of the default player? We can use what we call the Track API, for asking which tracks are available, activating them, and so on. Now, I would like to remind you of the structure of a track. I'm just going to display the content of one of these tracks.

The tracks are made of cues, and what we call a cue is a kind of time segment that is defined with a starting time and an ending time. The cue can have an ID, in that case a numeric ID (1, 2, 3), and a content that can be HTML with bold or italic elements; it can also be a voice, so when you see "v" followed by the name of the character that is speaking, it's a voice.

We are going to look at what we can do with such tracks during the course, and we will see how to handle a chapter menu, how to display a nice transcript on the side of the video that you can click to jump to the exact time the video says the words that are in the transcript. And we will also see how to choose the subtitle or caption track language for the video.

This is finished for this small introduction video; I will just conclude by explaining this crossOrigin="anonymous" attribute. We saw it during the HTML5 Part 1 course and many people asked questions about it. This is because of security constraints.

In browsers, when you've got an HTML page that is on a different location than the video file and the track files, you will get security constraint errors. If your server is configured for accepting different origins, then you can add this attribute crossOrigin="anonymous" in your HTML document and it is going to work.

The server here, mainline.i3s.unice.fr, has been configured to allow external HTML pages to include the videos it hosts and the subtitles it hosts; this is the reason. You can use the Dropbox public directory here because Dropbox also enables cross-origin requests.

This is all for this first video, I'll see you in the next one!
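To complement the explanation above, here is a sketch of what such markup can look like; the URLs are placeholders, and the media server must also send the appropriate CORS headers (e.g. Access-Control-Allow-Origin):

```html
<!-- The page is served from a different origin than the media and .vtt files,
     so we request CORS-enabled fetches with crossorigin="anonymous". -->
<video controls crossorigin="anonymous">
  <source src="https://media.example.org/video.mp4" type="video/mp4">
  <track label="English" kind="subtitles" srclang="en"
         src="https://media.example.org/subtitles-en.vtt" default>
</video>
```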
The Timed Text Track JavaScript API
In the W3Cx HTML5 Coding Essentials and Best Practices course, we saw that <video> and <audio> elements can have <track> elements. A <track> can have a label, a kind (subtitles, captions, chapters, metadata, etc.), a language (srclang attribute), a source URL (src attribute), etc.

Here is a small example of a video with 3 different tracks ("......" masks the real URLs, which are too long to fit in this page width!):
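A sketch of what such markup looks like ("......" stands for the real URLs; the labels and other attribute values are assumptions):

```html
<video id="myVideo" preload="metadata" controls crossorigin="anonymous">
  <source src="......mp4" type="video/mp4">
  <source src="......webm" type="video/webm">
  <track label="English subtitles" kind="subtitles" srclang="en" src="......vtt" default>
  <track label="Deutsch subtitles" kind="subtitles" srclang="de" src="......vtt">
  <track label="Chapters" kind="chapters" srclang="en" src="......vtt">
</video>
```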
And here is how it renders in your current browser (please play the video and try to show/hide the subtitles/captions):

Notice that the support for multiple tracks may differ significantly from one browser to another, in particular if you are using old versions.

You can read [this article by Ian Devlin: "HTML5 Video Captions – Current Browser Status"](http://www.iandevlin.com/blog/2015/04/html5/html5-video-captions-current-browser-status), written in April 2015, for further details. Here is a quick summary (as of May 2020):

- Safari provides a menu you can use to choose which subtitle/caption track to display. If one of the defined text tracks has the default attribute set, then it is loaded by default. Otherwise, the default is off.
- Chrome and Opera both provide a subtitle menu and load the text track set that matches the browser language. If none of the available text tracks match the browser's language, then it loads the track with the default attribute, if there is one. Otherwise, it loads none. Let's say that support is very incomplete (!).
- Firefox provides a subtitle menu but will show the first defined text track only if it has the default attribute set. It will load all tracks in memory as soon as the page is loaded.

There is [a Timed Text Track API in the HTML5/HTML5.1 specification](https://www.w3.org/TR/html51/semantics-embedded-content.html#timed-text-tracks) that enables us to manipulate <track> contents from JavaScript. Do you recall that text tracks are associated with WebVTT files? As a quick reminder, let's look at a WebVTT file:
```
WEBVTT

1
00:00:15.000 --> 00:00:18.000 align:start
<v Proog>On the left we can see...</v>

2
00:00:18.167 --> 00:00:20.083 align:middle
<v Proog>On the right we can see the...</v>

3
00:00:20.083 --> 00:00:22.000
<v Proog>...the <c.highlight>head-snarlers</c></v>

4
00:00:22.000 --> 00:00:24.417 align:end
<v Proog>Everything is safe. Perfectly safe.</v>
```
The different time segments are called "cues" and each cue has an id (1, 2, 3 and 4 in the above example), a startTime and an endTime, and a text content that can contain HTML tags for styling (<b>, etc.) or be associated with a "voice", as in the above example. In this case, the text content is wrapped inside <v name_of_speaker>...</v> elements.

It's now time to look at the JavaScript API for manipulating tracks, cues, and events associated with their life cycle (a first minimal sketch is shown after the list below). In the following lessons, we will look at different examples which use this API to implement missing features such as:

- how to build a menu for choosing the subtitle track language to display,
- how to display a synchronized description of a video (useful for disabled people, for example),
- how to display a clickable transcript aside the video (similar to what the edX video player does),
- how to show chapters,
- how to use JSON encoded cue contents (useful for showing external resources in the HTML document while a video is playing),
- etc.

1.2.2 The HTML Track Element, getting the status of a track

Hi, in this video I will show you how we can work with the track elements from JavaScript, just to know which track has been loaded and which track is active. For that, we will manipulate different properties of the HTML track element from JavaScript.

The first thing I am going to do is to add a small div at the end of the document for displaying the different track statuses. I added a div here called trackStatusesDiv with a heading... I am going to add some CSS to visualize this area. Like that, we will have the description of the track here. I added a border and some margins and so on. From JavaScript, we cannot do anything before the page has been loaded, so I am adding a window.onload listener, and all the treatments will be in this function.

The first thing I am going to do is get these track elements here... and I am going to get them in a variable called htmlTracks. How can I get them? I'm going to stop the automatic refresh on JSBin for the moment. So, querySelectorAll is a function that will return a collection with all the tracks, an array with all the tracks, all the HTML elements. I am going to call a function called displayTrackStatuses that I write here. I will first iterate on these tracks and display the different values in the console. We are doing a loop.

I will first display something in the console. I am going to add a currentTrack variable, it will be easier. I can write currentTrack.label for example, that will display the value of the different attributes. This is just for checking that my code is OK. Open the console, I have got one error... If I click here "Run with JS" I can see that it's working. What I can display is the label, I can also display the kind... you remember the kind: subtitles, subtitles and chapters for the different tracks. And I can also display the language with srclang. So English, Deutsch (for German).

I can also display what is called the status. It is readyState, a property you can use only from JavaScript. It says that for track number 1 the value is 0, for track number 2 it is 2, and for track number 3 it's 0. 2 means that the track is loaded and 0 means that the track is not available. We've got the first track, the English subtitles, not available: readyState 0. And we've got the German subtitles that have been loaded because that track has the default attribute, and we showed that in Google Chrome the track with the default attribute is loaded when the page is loaded.

This is how we can consult the different statuses of a track from JavaScript.

I am going to copy and paste some code for just displaying this in a nicer way here. This is how we can display the different statuses, and I can add a button that will call this function just to refresh the different statuses.

Let's add a button, we call it refresh, and when we click on it, we will call displayTrackStatuses. If I click here... just checking there is no error... it will refresh this thing. So now I am going to try this code with Safari, because you remember Safari has a menu for changing the different tracks.

I prepared that already, so here I can start playing the same video that has Deutsch subtitles loaded by default, you can see that here. If I choose English subtitles now, and I refresh the track statuses, you can see that the English subtitles are loaded now and the Deutsch subtitles are also available. We are going to use these different attributes for forcing some tracks to load programmatically from JavaScript, and this will enable us to make a sort of menu for choosing the different tracks. I will explain that in the next video.
The HTML track element
Let's go back to our example; its HTML code is shown below.

This example defines three <track> elements. From JavaScript, we can manipulate these elements as "HTML elements" - we will call them the "HTML views" of tracks.
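A minimal sketch of such markup, assuming a video with English subtitles, German subtitles marked default, and English chapters (which matches the tracks discussed in the videos above); the ids, labels and file names are placeholders, not necessarily the exact course assets:

```
<video id="myVideo" preload="metadata" controls>
   <source src="video.mp4" type="video/mp4">
   <source src="video.webm" type="video/webm">
   <!-- track 0: English subtitles (not loaded by default) -->
   <track label="English subtitles" kind="subtitles" srclang="en" src="subtitles_en.vtt">
   <!-- track 1: German subtitles, loaded by default because of the default attribute -->
   <track label="Deutsch subtitles" kind="subtitles" srclang="de" src="subtitles_de.vtt" default>
   <!-- track 2: English chapters -->
   <track label="English chapters" kind="chapters" srclang="en" src="chapters_en.vtt">
</video>
```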
Function displayTrackStatuses (code extract):

```
1. var video, htmlTracks;
2. var trackStatusesDiv;
3. window.onload = function() {
4.    // called when the page has been loaded
5.    video = document.querySelector("#myVideo");
6.    trackStatusesDiv = document.querySelector("#trackStatusesDiv");
7.    // Get the tracks as HTML elements
8.    htmlTracks = document.querySelectorAll("track");
9.    // displays their statuses in a div under the video
10.   displayTrackStatuses(htmlTracks);
11. };
12. function displayTrackStatuses(htmlTracks) {
13.   // displays track info
14.   for(var i = 0; i < htmlTracks.length; i++) {
15.      var currentHtmlTrack = htmlTracks[i];
16.      var label = "<li>label = " + currentHtmlTrack.label + "</li>";
17.      var kind = "<li>kind = " + currentHtmlTrack.kind + "</li>";
18.      var lang = "<li>lang = " + currentHtmlTrack.srclang + "</li>";
19.      var readyState = "<li>readyState = " + currentHtmlTrack.readyState + "</li>";
20.      trackStatusesDiv.innerHTML += "<li><b>Track " + i + ":</b></li>"
21.         + "<ul>" + label + kind + lang + readyState + "</ul>";
22.   }
23. }
```
The code is rather straightforward:
- We cannot access any HTML element before the page has been loaded. This is why we do all the work in the window.onload listener.

- Line 6: we get a pointer to the div with id=trackStatusesDiv that will be used to display the track statuses.

- Line 8: we get all the track elements in the document. They are HTML track elements.

- Line 10: we call a function (defined at line 12) that builds some HTML to display the track statuses in the div we got at line 6.

- Lines 14-19: we iterate on the HTML tracks, and for each track we get the label, the kind and the srclang attribute values. Notice, at line 19, the use of the readyState attribute, only available from JavaScript, which gives the current HTML track state.
You can see on the screenshot (or [from the JSBin example](https://jsbin.com/higebo/1/edit?html,css,js,output)) that the German subtitle file has been loaded, and that none of the other tracks have been loaded.
Possible values for the readyState attribute of HTML tracks:
- 0 = NONE; the text track's cues have not been obtained.

- 1 = LOADING; the text track is loading with no errors yet. Further cues can still be added to the track by the parser.

- 2 = LOADED; the text track has been loaded with no errors.

- 3 = ERROR; the text track was enabled, but when the user agent attempted to obtain it, something failed. Some or all of the cues are likely missing and will not be obtained.
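As a small illustration (a hypothetical helper, not part of the course code), these numeric codes can be turned into readable labels when displaying track statuses:

```
// Hypothetical helper: map an HTML track's readyState code to a label
function readyStateLabel(htmlTrack) {
   var labels = ["NONE", "LOADING", "LOADED", "ERROR"];
   return labels[htmlTrack.readyState] || "UNKNOWN";
}
// e.g. readyStateLabel(htmlTracks[1]); // "LOADED" for the default German track
```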
Now, it's time to look at the twin brother of an HTML track: the corresponding TextTrack object!

1.2.3 The TextTrack Object

Hi! Now we are preparing ourselves for reading the content of the different text tracks and displaying them.

But before that, I must introduce what we call the TextTrack object, a JavaScript object that is associated with the HTML track element. A track has two different views... maybe it is simpler to say that it has an HTML view, which means you can do a getElementById and manipulate the HTML element, this track element here, from JavaScript. Or we can work with its twin brother, the TextTrack, and this second view is the one we are going to use for forcing a track to be loaded, for reading its content, and for forcing a subtitle or caption track to be displayed. We just slightly modified the previous example by displaying the mode of each track.
The mode is a property of the TextTrack, not of the HTML track. This mode can be "disabled", "showing" or "hidden". When it is disabled, playing the video will not fire any event related to the track. We will talk about events later, but a disabled track behaves the same as if there were no track at all.

A track that is "showing" is displayed in the video, if the implementation of the video player supports that. A track that is "hidden" is just not displayed.

How did we manipulate and access this mode property?
The displayTrackStatuses function that we wrote earlier displayed the different properties of the HTML track, like the label, the kind or the language. This time, we access its twin brother, the TextTrack, by using the track property.

Every HTML track element has a track property that is a TextTrack.

Here, from the current HTML track, I am getting the TextTrack (currentTextTrack).
This is the object we use to access the mode and display it here. Another interesting thing is that if we set the mode, if we modify its value from "disabled" to "showing" or to "hidden", it will force the track to be loaded asynchronously, in the background, by the browser. We added two buttons in this example, "force load track 0" and "force load track 2", because by default track 0, the English subtitles, is not loaded, and the chapters, in track number 2, are not loaded either. We are going to force track 0 to be loaded.

If I click here on "force load track 0", you see that the status changes - the mode changes to "hidden" and the track is now loaded. What happened in the background?

Let's have a look at the code we wrote. I am going to zoom in a little bit...
The button we clicked is this one, "force load track 0"; it calls a function named forceLoadTrack(0) that I prepared.

What does this function do?

It calls another function called getTrack that checks if the track is already loaded.

If it is already loaded, then the second parameter here, which is a callback function, will be called because the track is ready to be read.
In case the track has not been loaded, we set the mode to "hidden", which triggers the browser to load the track asynchronously, in the background.

And when the track is ready, then, and only then, we call readContent.

Let's have a look at this getTrack function that we wrote. It says: getTrack, please load the TextTrack corresponding to HTML track number n.

So here is the function. The first thing we do is get the text track from the HTML track.
Then we check on the HTML track whether it is already loaded.

If it is, then we call the function that was passed as the second parameter: readContent. And readContent is just here; for the moment it does not really read the content, it just updates the status.

If I click on "force load track 2", for example, it will load the track, and when the track has arrived it will call displayStatus(), which shows the updated status of the track.

In the case where the track is not there, the readyState is not equal to 2, so we force the track to be loaded by setting the mode to "hidden".
This may take some time: you understand that the browser is loading the track over the Web. It may take 2 seconds, for example. We need a listener that listens to the load event.

So htmlTrack.addEventListener('load', ...) will trigger only when the track has been loaded, and only in that case do we call the callback function - the readContent function passed as the second parameter - in order to read the track.

If I look at the console and start the application again: only the second track has been loaded. I click "force load track 0", it says "forcing the track to be loaded", it loads the track and it calls the callback: "reading content of loaded track".

If I click the same button again, it says "the text track is already loaded" and I am going to read it now. We cannot load a track several times; if it is already loaded, we just use it. In the next video, we will show how we can effectively read the content of the track and do something with it.
The object that contains the cues (subtitles, captions or chapter descriptions from the WebVTT file) is not the HTML track itself. It is another object that is associated with it: a TextTrack object!

The TextTrack JavaScript object has different methods and properties for manipulating track content, and is associated with different events. But before going into detail, let's see how to obtain a TextTrack object.

Obtaining a TextTrack object that corresponds to an HTML track

First method: get a TextTrack from its associated HTML track.

The HTML track element has a track property which returns the associated TextTrack object.
Example source code:
```
// HTML tracks
var htmlTracks = document.querySelectorAll("track");

// The TextTrack object associated with the first HTML track
var textTrack = htmlTracks[0].track;
var kind = textTrack.kind;
var label = textTrack.label;
var lang = textTrack.language;
// etc.
```
Note that once we get a TextTrack object, we can manipulate its kind, label and language attributes (be careful: it's language, not srclang like the equivalent attribute name on HTML tracks). Other attributes and methods are described later in this lesson.
Second method: get a TextTrack from the HTML video element.

The <video> element (and the <audio> element too) has a textTracks property accessible from JavaScript:

```
var videoElement = document.querySelector("#myVideo");
var textTracks = videoElement.textTracks; // one TextTrack for each HTML track element
var textTrack = textTracks[0];            // corresponds to the first track element
var kind = textTrack.kind;                // e.g. "subtitles"
var mode = textTrack.mode;                // e.g. "disabled", "hidden" or "showing"
```

The mode property of TextTrack objects

TextTrack objects have a mode property, which is set to one of:

1. "showing": the track is either already loaded, or is being loaded by the browser. As soon as it is completely loaded, subtitles or captions will be displayed in the video. Other kinds of track will be loaded but will not necessarily show anything visible in the document. All tracks that have mode="showing" will fire events while the video is being played.

2. "hidden": the track is either already loaded, or is being loaded by the browser. All tracks that have mode="hidden" will fire events while the video is being played. Nothing will be visible in the standard video player GUI.

3. "disabled": this is the mode where tracks are not being loaded. If a loaded track has its mode set to "disabled", it will stop firing events, and if it was in mode="showing" the subtitles or captions will stop being displayed in the video player.

TextTrack content can only be accessed if a track has been loaded! Use the mode property to force a track to be loaded!

BE CAREFUL: you cannot access a TextTrack's content if the corresponding HTML track has not been loaded by the browser! It is possible to force a track to be loaded by setting the mode property of the TextTrack object to "showing" or "hidden". Tracks that are not loaded have their mode property set to "disabled".

Here is an example that will test if a track has been loaded, and if it hasn't, will force it to be loaded by setting its mode to "hidden". We could have used "showing"; in this case, if the file is a subtitle or a caption file, then the subtitles or captions will be displayed on the video as soon as the track has finished loading.

[Try the example at JSBin](https://jsbin.com/bubeye/1/edit?html,console,output)
Here is what we added to the HTML code:

```
<!-- two buttons that force a given track to load (labels and exact markup are indicative) -->
<button onclick="forceLoadTrack(0);">
   Force load track 0
</button>
<button onclick="forceLoadTrack(2);">
   Force load track 2
</button>
```

The buttons call a function named forceLoadTrack(trackNumber) that takes as a parameter the number of the track to get (and force load if necessary).
Here are the additions we made to the JavaScript code from the previous example:

```
1. function readContent(track) {
2.    console.log("reading content of loaded track...");
3.    displayTrackStatuses(htmlTracks); // update document with new track statuses
4. }
5.
6. function getTrack(htmlTrack, callback) {
7.    // get the TextTrack object associated with this HTML track
8.    var textTrack = htmlTrack.track;
9.
10.   if(htmlTrack.readyState === 2) {
11.      console.log("text track already loaded");
12.      // call the callback function, the track is available
13.      callback(textTrack);
14.   } else {
15.      console.log("Forcing the text track to be loaded");
16.
17.      // this will force the track to be loaded
18.      textTrack.mode = "hidden"; // loading a track is asynchronous, we must use an event listener
19.      htmlTrack.addEventListener('load', function(e) {
20.         // the track has arrived, call the callback function
21.         callback(textTrack);
22.      });
23.   }
24. }
25.
26. function forceLoadTrack(n) {
27.    // first parameter = the HTML track whose TextTrack we want,
28.    // second = a callback function called when the track is loaded,
29.    // that takes the loaded TextTrack as parameter
30.    getTrack(htmlTracks[n], readContent);
31. }
```
Explanations:
- Lines 26-31: the function called when a button has been clicked. This function in turn calls the getTrack(trackNumber, callback) function. It passes the readContent callback function as a parameter. This is typical JavaScript asynchronous programming: the getTrack() function may force the browser to load the track and this can take some time (a few seconds); when the track has downloaded, the getTrack function calls the function we passed (the readContent function, known as a callback function), with the loaded track as a parameter.

- Line 6: the getTrack function. It first checks if the HTML track is already loaded (line 10). If it is, it calls the callback function passed by the caller, with the loaded TextTrack as a parameter. If the TextTrack is not loaded, then it sets its mode to "hidden". This will instruct the browser to load the track. Because that may take some time, we must use a load event listener on the HTML track before calling the callback function. This allows us to be sure that the track is really completely loaded.

- Lines 1-4: the readContent function is only called with a loaded TextTrack. Here we do nothing special for the moment except that we refresh the different track statuses in the HTML document.
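As a side note, the same load-then-callback logic could also be wrapped in a Promise if you prefer that style; here is a minimal sketch (not part of the course code, just a rewrite of the getTrack pattern above):

```
// Hypothetical Promise-based variant of getTrack
function getTextTrack(htmlTrack) {
   return new Promise(function(resolve) {
      var textTrack = htmlTrack.track;
      if (htmlTrack.readyState === 2) {
         resolve(textTrack);           // already loaded
      } else {
         textTrack.mode = "hidden";    // forces the asynchronous load
         htmlTrack.addEventListener('load', function() {
            resolve(textTrack);        // loaded, hand back the TextTrack
         });
      }
   });
}

// usage: getTextTrack(htmlTracks[0]).then(readContent);
```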

1.2.4 Working With Cues

Hi! We will continue the last example from the previous video, and this time we will click on the "force load track 0" or "force load track 2" buttons.

You remember that track number 0 (the English subtitles) was not loaded; readyState=0 here says the track is not loaded. Track 2 was not loaded either: it contains the English chapters of the video. This time, I will explain how we can read the content of the file. If I click on "force load track 0", I see here the content of the WebVTT file. I didn't read it as pure text, I used the track API to access each cue individually; each one of these elements here is a cue, and I access the id, the start time, the end time and the content, which we call the text content of the cue. If I click on "force load track 2", I see the chapter definitions here: chapter 1 of the video goes from 0 to 26 seconds, and it corresponds to the introduction part of the video.

How did we do that? We just completed the readContent function that
previously just showed the statuses of the different tracks. Remember
that when we clicked on a button, we forced the text track corresponding
to the HTML track to be loaded in memory, and then we can read it. A
TextTrack object has different properties, and the most important one is called cues. The cues property is the list of every cue inside the VTT file; each cue corresponds to a time segment and has an id and a text content.

If you do track.cues, you get the list of the cues and you can iterate on them.

For each cue, we are going to get its id: cue.id here. It corresponds to the id of cue number i. In my example, I have got an index in the loop, I get the current cue, and I get the id of this cue. I can also get the start time, the end time and the text. So, cue.text corresponds exactly to the sentence highlighted here. This is the only thing I wanted to show you, because next time we are going to do something really interesting with this content: we are going to display a clickable transcript on the side of the video. And when we click on it, the video will jump to the corresponding position. This is exactly what the edX video player does, the one you are watching right now.
A TextTrack object has different properties and methods:

- kind: equivalent to the kind attribute of HTML track elements. Its value is either "subtitles", "captions", "descriptions", "chapters", or "metadata". We will see examples of chapters, descriptions and metadata tracks in subsequent lessons.

- label: the label of the track, equivalent to the label attribute of HTML track elements.

- language: the language of the text track, equivalent to the srclang attribute of HTML track elements (be careful: it's not the same spelling!).

- mode: explained earlier. Can have values equal to "disabled", "hidden" or "showing". Can force a track to be loaded (by setting the mode to "hidden" or "showing").

- cues: gets the list of cues as a TextTrackCueList object. This is the complete content of the WebVTT file!

- activeCues: used in event listeners while the video is playing. Corresponds to the cues located in the current time segment. The start and end times of cues can overlap. In reality this may rarely happen, but this property exists in case it does, returning a TextTrackCueList object that contains all active cues at a given time.

- addCue(cue): adds a cue to the list of cues.

- removeCue(cue): removes a cue from the list of cues.

- getCueById(id): returns the cue with a given id (not implemented by all browsers - a polyfill is given in the examples from the next lessons).

A TextTrackCueList is a collection of cues, each of which has different properties and methods:

- id: the cue id as written in the line that starts a cue in the WebVTT file.

- startTime and endTime: define the time segment for the cue, in seconds, as a floating point value. It is not the formatted string we have in the WebVTT file (see screenshot below).

- text: the cue content.

- getCueAsHTML(): a method that returns an HTML version of the cue content, not as plain text.

- Others such as align, line, position, size, snapToLines, etc., that correspond to the position of the cue, as specified in the WebVTT file. See the HTML5 course Part 1 about cue positioning.
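Since getCueById is not available everywhere, a simple fallback can be written on top of the cues list; here is a minimal sketch (not necessarily the polyfill used in the next lessons):

```
// Hypothetical fallback: find a cue by id in a TextTrack's cue list
function cueById(textTrack, id) {
   var cues = textTrack.cues;
   if (typeof cues.getCueById === "function") {
      return cues.getCueById(id);          // native TextTrackCueList method
   }
   for (var i = 0; i < cues.length; i++) { // manual lookup
      if (cues[i].id === id) return cues[i];
   }
   return null;
}
```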
Example that displays the content of a track

[Here is an example at JSBin that displays the content of a track](https://jsbin.com/teruhay/1/edit?html,css,js,output):

We just changed the content of the readContent(track) method from the example in the previous lesson:
```
function readContent(track) {
   console.log("reading content of loaded track...");
   //displayTrackStatuses(htmlTracks);
   // instead of displaying the track statuses, we display
   // the track content in the same div
   // first, empty the div
   trackStatusesDiv.innerHTML = "";

   // get the list of cues for that track
   var cues = track.cues;
   // iterate on them
   for(var i = 0; i < cues.length; i++) {
      // current cue
      var cue = cues[i];
      var id = cue.id + "<br>";
      var timeSegment = cue.startTime + " => " + cue.endTime + "<br>";
      var text = cue.text + "<p>";
      trackStatusesDiv.innerHTML += id + timeSegment + text;
   }
}
```
As you can see, the code is simple: you first get the cues for the given TextTrack (it must be loaded; this is the case since we took care of it earlier), then iterate on the list of cues, and use the id, startTime, endTime and text properties of each cue.

This technique will be used in one of the next lessons, where we will show you how to make a clickable transcript on the side of the video - something quite similar to what the edX video player does.

1.2.5 Listening to Events

Ok. This time we will talk about track events and cue events.

First, let's start with a small demonstration. If I play this video, the video plays and I can listen to the cue 'enter' and 'exit' events.

Each time a new cue is entered, we will display it here, and each time it is exited, we will display it here.

We saw that we can display them in sync with the video now. Each time a cue is reached, it means that the current time entered a new time segment defined by the start and end times of a cue.
Let's have a look again at one of the VTT files.

Each cue holds a start time and an end time, so when the time enters the 15th second, we get a cue enter event and we can get this content and show it on the HTML page.

When we go out of this time period, when we go past 18 seconds, we exit this cue and we enter the next one.

How are these events handled in the JavaScript code? Everything is done in the readContent method that we saw earlier.
This time, instead of just iterating on the different cues of the TextTrack to display them, we will iterate on the cues and, for each cue individually, add listeners on that cue: an exit listener and an enter listener.

So what do we do? We iterate on the cues and call addCueListeners for the current cue.

This method, addCueListeners, defines two listeners: the cue enter listener and the cue exit listener.

In the cue enter listener we just create a string, "entered cue id=" plus "text=", that corresponds to the text displayed when a cue is reached.

We display the id and we display the text.

The same thing is done when we exit.

When we exit, we just display the id: "exited cue id=". This is how we can have individual enter and exit listeners for each cue.
This will enable us to highlight the current cue in a transcript while the video is playing.

The only problem is that, as of December 2015, Firefox still does not support these sorts of listeners.

The implementation is not there yet, so you can use a fallback: a listener on the track that listens to the 'cuechange' event.
If I just comment out the call to addCueListeners and uncomment this piece of code that puts a cuechange listener on the track itself, then instead of knowing that we entered or exited an individual cue, we get, for every new time segment, the list of the cues that should be active and displayed during this time segment. As different cues can overlap, the time segments can overlap (it is not often the case, but it may occur), so what the callback function of this listener gives us is a list of active cues. Most of the time there is only one.

Anyway, you can just work with the list of cues. Here we take only the first active cue because we are assuming that the cues do not overlap.

I added a small test here because sometimes we get some strange ghost cues that are active but not defined. I don't know exactly what the problem was when I tested it, but I added this test to avoid some error messages...
We get the first active cue and we just display it. We get the id, we get the text and that is all. In that case, if I run the application again, instead of having enter and exit events, I will just have cuechange events.

It starts at 15 seconds.

I had a small bug here, it is not this.id but cue.id... we can start again.

This is a fallback for Firefox if you want to display the cues in sync with the video.
The next video will show how to display a transcript here with the current cue highlighted, and you can click on the cues in order to jump to the right place in the video.

After that, I think we will have seen the most useful properties, methods and events you can use with tracks and cues...

Instead of reading the whole content of a track at once, like in the previous example, it might be interesting to process the track content cue by cue, while the video is being played.
For example, you choose which track you want - say, German subtitles - and you want to display the subtitles in sync with the video, below the video, with your own style and animations...

Or you display the entire set of subtitles to the side of the video and you want to highlight the current one...

For this, you can listen for different sorts of events.
The two types of cue event are:
1. enter and exit events fired for cues.

2. cuechange events fired for TextTrack objects (good support).
Example of cuechange listener on TextTrack
```
// track is a loaded TextTrack
track.addEventListener("cuechange", function(e) {
   var cue = this.activeCues[0];
   console.log("cue change");
   // do something with the current cue
});
```
In the above example, let's assume that we have no overlapping cues for the current time segment.

The above code listens for cuechange events: when the video is being played, the time counter increases, and when this time counter reaches a time segment defined by one or more cues, the callback is called.

The list of cues located in the current time segment is in this.activeCues, where this represents the track that fired the event.

In the following lessons, we show how to deal with overlapping cues (cases where we have more than one active cue).
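If you want to be robust against overlapping cues right away, one simple approach is to loop over all active cues instead of taking only the first one; a minimal sketch:

```
// Variant of the cuechange listener that handles several active cues
track.addEventListener("cuechange", function(e) {
   for (var i = 0; i < this.activeCues.length; i++) {
      var cue = this.activeCues[i];
      console.log("active cue id=" + cue.id + " text=" + cue.text);
   }
});
```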
Example of enter and exit event listeners on a track's cues:

```
// iterate on all cues of the current track
var cues = track.cues;
for(var i = 0, len = cues.length; i < len; i++) {
   // current cue: also add enter and exit listeners to it
   var cue = cues[i];
   addCueListeners(cue);

   ...
}

function addCueListeners(cue) {
   cue.onenter = function(){
      console.log('enter cue id=' + this.id);
      // do something
   };

   cue.onexit = function(){
      console.log('exit cue id=' + cue.id);
      // do something else
   };
} // end of addCueListeners...
```
Functions readContent and addCueListeners (code extract):

```
function readContent(track) {
   console.log("adding enter and exit listeners to all cues of this track");

   trackStatusesDiv.innerHTML = "";

   // get the list of cues for that track
   var cues = track.cues;
   // iterate on them
   for(var i = 0; i < cues.length; i++) {
      // current cue
      var cue = cues[i];
      addCueListeners(cue);
   }

   video.play();
}

function addCueListeners(cue) {
   cue.onenter = function(){
      trackStatusesDiv.innerHTML += 'entered cue id=' + this.id + " "
                                    + this.text + "<br>";
   };
   cue.onexit = function(){
      trackStatusesDiv.innerHTML += 'exited cue id=' + this.id + "<br>";
   };
} // end of addCueListeners...
```
1.3 Advanced Features for <audio> and <video> Players

1.3.1 With a Clickable Transcript on the Side
Hi! This time, we will just go a little bit further than in the previous examples.

When we click on a button, we will force the loading of the track, read its content, and add cue listeners in order to trigger events when the video is played. This time we will display the cues on the side, as hyperlinks you can click: if I click somewhere, the video starts at the corresponding location, and you can see that the cues are highlighted in black as the video advances.

There are not a lot of subtitles at this location but... you can see that the transcript is highlighted as the video is playing.

What have we added to the previous example?

The first thing is that we defined a rectangular area here: it is just a div with an id="transcript", and we added some CSS for putting the video and the transcript on the same horizontal position, so that the video can grow and the transcript too.

We use floating positions: I put 'float:left;' for the transcript that is on the right, because if I put 'right' it will grow but it will be aligned on the right... and I prefer it on the left.

We can take a look at the CSS, there is nothing complicated, and this area can scroll because of the overflow:auto; rule we added to the div.

When we click on the buttons here, we call a function called loadTranscript() instead of forceLoadTrack(0) and forceLoadTrack(1); this time we just ask for a particular language and, it's implicit, but we are also looking for track files that are not chapters.
Let's have a look at this loadTranscript() function. The loadTranscript() function has a parameter that is the language.

The first thing we do is clear the div (we just set its content to null), and then we disable all the tracks: we set the mode of all the tracks to 'disabled', because when we click here and change the language of the transcript, we need to disable all tracks and enable just the one we are interested in.

How can we locate the right track for the language?

We just iterate on the tracks... this is the textTracks object... and we get the current track both as an HTML element and as a TextTrack, and using the TextTrack we just check the language and the kind.

And if the language is equal to the one we are looking for, and if the kind is different from chapters, then this is the track we would like.

By forcing the mode to "showing", in case the track has not already been loaded, this will trigger the browser to load the file.

This is where we test if the file has already been loaded; it is exactly the same test we did in the previous example.

If the track is already loaded, we just display the cues in sync with the video, and if the track has not been loaded, we display the cues after the track has been loaded.

This is the same function as the getContent we had, except that we renamed it.

It takes the track as an HTML element and the TextTrack as parameters.

Let's have a look at how it is done.
displayCuesAfterTrackLoaded just waits for the load event to be triggered, and then it calls the displayCues function that will display the cues in sync.

Either we call it directly if the track is loaded, or we know that the loading has been triggered if necessary, and we just wait in the load event listener.

Let's have a look at what the displayCues function does.

The displayCues function is exactly the same as the readContent we had earlier.

It gets the cue list for the given track and adds listeners to the track.

And instead of just displaying the plain text below the video as we did earlier, we will make a nicer format and we will add the id of the cue in the element we are creating.

Let's have a look... I'm calling the inspector... let's have a look at one of the list items here.

You can see that in the list item we use the CSS class called cues, just for the formatting, for putting them in blue and adding an underline when the mouse is over, and we use the id of the cue in the list item.

So the id is 10, and we also created an onclick listener that calls a function we will detail later, called jumpTo.

And here is the starting time of the cue.

What we did is that we created a list item with a given id, and if we click on it, it has a click listener that will call jumpTo with the time as the parameter.

This is the trick: this onclick listener will make the video jump to the right position.

How did we create that? We use classical techniques.

We created a string called clickableTransText that is just an HTML list item with the id and the onclick listener that is built with the start time of the cue, and we just add this list item to the div.

This function here, addToTranscriptDiv, just adds the HTML fragment to the DOM.

We can take a look at addToTranscriptDiv.

It just does transcriptDiv.innerHTML += this text.

The jumpTo method makes the video jump... in order to jump to a particular time in the video we are just setting the currentTime property of the video element, and we want it to start playing as soon as the jump is done.

So video = document.querySelector("#myVideo") is just the video HTML element.
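That is all the jumpTo function needs to do; here it is in full (it matches the jumpTo used in the example code):

```
// jump to a given time (in seconds) in the video and start playing from there
function jumpTo(time) {
   video.currentTime = time;
   video.play();
}
```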
I recommend that you go slowly through the code; it is a bit longer because we added some formatting for the voices and so on, but it is not complicated.

Take your time and look at how it's done.
Foreword about the set of five examples presented in this section: the code of the examples is larger than usual, but each example integrates blocks of code already presented and detailed in the previous lessons.

Example #1: create an accessible player with a clickable transcript of the video presentation.

It might be interesting to read the content of a track before playing the video.

This is what the edX video player does: it reads a single subtitle file and displays it as a transcript on the right.

In the transcript, you can click on a sentence to make the video jump to the corresponding location.

We will learn how to do this using the track API.
Read the WebVTT file at once using the track API and make a clickable transcript.

Here we decided to code something similar, except that we will offer a choice of track/subtitle language.

Our example offers English or German subtitles, and also another track that contains the chapter descriptions (more on that later).

Using a button to select a language (track), the appropriate transcript is displayed on the right. Like the edX player, we can click on any sentence in order to force the video to jump to the corresponding location. While the video is playing, the current text is highlighted.

Some important things here:

1. Browsers do not load all the tracks at the same time, and the way they decide when and which track to load differs from one browser to another. So, when we click on a button to choose the track to display, we need to enforce the loading of the track, if it has not been loaded yet.

2. When a track file is loaded, we iterate on the different cues and generate the transcript as a set of <li>...</li> elements, one <li> per cue/sentence.

3. We define the id attribute of the <li> to be the same as the cue.id value. In this way, when we click on a <li> we can get its id and find the corresponding cue start time, and make the video jump to that time location.

4. We add an enter and an exit listener to each cue. These will be useful for highlighting the current cue. Note that these listeners are not yet supported by FireFox (you can use a cuechange event listener on a TextTrack instead - the source code for FireFox is commented in the example).

[Try this example at JSBin](https://jsbin.com/sodihux/1/edit?html,css,js,output):
```
var video, transcriptDiv;
// TextTracks, html tracks, urls of tracks
var tracks, trackElems, tracksURLs = [];
var buttonEnglish, buttonDeutsch;

window.onload = function() {
   console.log("init");
   // when the page is loaded, get the different DOM nodes
   // we're going to work with
   video = document.querySelector("#myVideo");
   transcriptDiv = document.querySelector("#transcript");

   // The tracks as HTML elements
   trackElems = document.querySelectorAll("track");

   // Get the URLs of the vtt files
   for(var i = 0; i < trackElems.length; i++) {
      var currentTrackElem = trackElems[i];
      tracksURLs[i] = currentTrackElem.src;
   }

   buttonEnglish = document.querySelector("#buttonEnglish");
   buttonDeutsch = document.querySelector("#buttonDeutsch");

   // we enable the buttons only in this load callback,
   // we cannot click before the video is in the DOM
   ...
};

function loadTranscript(lang) {
   ...
   disableAllTracks();

   // Locate the track with language = lang
   for(var i = 0; i < tracks.length; i++) {
      // current track
      var track = tracks[i];
      var trackAsHtmlElem = trackElems[i];

      // Only subtitles/captions are ok for this example...
      if((track.language === lang) && (track.kind !== "chapters")) {
         track.mode = "showing";

         if(trackAsHtmlElem.readyState === 2) {
            // the track has already been loaded
            displayCues(track);
         } else {
            displayCuesAfterTrackLoaded(trackAsHtmlElem, track);
         }

         /* Fallback for FireFox that still does not implement cue enter and exit events
         track.addEventListener("cuechange", function(e) {
            var cue = this.activeCues[0];
            console.log("cue change");
            var transcriptText = document.getElementById(cue.id);
            transcriptText.classList.add("current");
         });
         */
      }
   }
}

function displayCuesAfterTrackLoaded(trackElem, track) {
   // Create a listener that will only be called once the track has
   // been loaded. We cannot display the transcript before
   // the track is loaded
   trackElem.addEventListener('load', function(e) {
      console.log("track loaded");
      displayCues(track);
   });
}

function disableAllTracks() {
   for(var i = 0; i < tracks.length; i++)
      // the track mode is important: disabled tracks do not fire events
      tracks[i].mode = "disabled";
}

function displayCues(track) {
   // get the list of cues for that track
   var cues = track.cues;

   // iterate on all cues of the current track
   for(var i = 0, len = cues.length; i < len; i++) {
      // current cue, also add enter and exit listeners to it
      var cue = cues[i];
      addCueListeners(cue);

      // Test if the cue content is a voice <v speaker>....</v>
      var voices = getVoices(cue.text);
      var transText = "";
      if (voices.length > 0) {
         for (var j = 0; j < voices.length; j++) { // how many voices?
            transText += voices[j].voice + ': ' + removeHTML(voices[j].text);
         }
      } else
         transText = cue.text; // not a voice text

      var clickableTransText = "<li class='cues' id=" + cue.id
              + " onclick='jumpTo("
              + cue.startTime + ");'" + ">"
              + transText + "</li>";

      addToTranscriptDiv(clickableTransText);
   }
}

function getVoices(speech) {
   // takes a text content and check if there are voices
   var voices = []; // inside
   var pos = speech.indexOf('<v'); // voices are like <v Michel> ....
   while (pos != -1) {
      endVoice = speech.indexOf('>');
      var voice = speech.substring(pos + 2, endVoice).trim();
      var endSpeech = speech.indexOf('</v>');
      var text = speech.substring(endVoice + 1, endSpeech);
      voices.push({
         'voice': voice,
         'text': text
      });
      speech = speech.substring(endSpeech + 4);
      pos = speech.indexOf('<v');
   }
   ...
```
HTML code (extract):

```
...
<h3>Video Transcript</h3>
<button onclick="loadTranscript('en');">English</button>
<button onclick="loadTranscript('de');">Deutsch</button>
</div>
<div id="transcript"></div>
...
```
```
// Transcript.js, by dev.opera.com
function loadTranscript(lang) {
   var url = "https://mainline.i3s.unice.fr/mooc/" +
             'elephants-dream-subtitles-' + lang + '.vtt';

   // Will download using Ajax + extract subtitles/captions
   loadTranscriptFile(url);
}

function loadTranscriptFile(webvttFileUrl) {
   // Using Ajax/XHR2 (explained in detail in Module 3)
   var reqTrans = new XMLHttpRequest();

   reqTrans.open('GET', webvttFileUrl);

   // callback, called only once the response is ready
   reqTrans.onload = function(e) {

      var pattern = /^([0-9]+)$/;
      var patternTimecode = /^([0-9]{2}:[0-9]{2}:[0-9]{2}[,.]{1}[0-9]{3}) --> ([0-9]{2}:[0-9]{2}:[0-9]{2}[,.]{1}[0-9]{3})(.*)$/;

      var content = this.response; // content of the webVTT file

      var lines = content.split(/\r?\n/); // Get an array of text lines
      var transcript = '';
      for (i = 0; i < lines.length; i++) {
         var identifier = pattern.exec(lines[i]);

         // is there an id for this line, if it is, go to next line
         if (identifier) {
            i++;
            var timecode = patternTimecode.exec(lines[i]);
            // is the current line a timecode?
            if (timecode && i < lines.length) {
               // if it is go to next line
               i++;
               // it can only be a text line now
               var text = lines[i];

               // is the text multiline?
               while (lines[i] !== '' && i < lines.length) {
                  text = text + '\n' + lines[i];
                  i++;
               }

               var transText = '';
               var voices = getVoices(text);
               // is the extracted text multi voices ?
               if (voices.length > 0) {
                  // how many voices ?
                  for (var j = 0; j < voices.length; j++) {
                     transText += voices[j].voice + ': '
                                  + removeHTML(voices[j].text)
                                  + '<br />';
                  }
               } else
                  // not a voice text
                  transText = removeHTML(text) + '<br />';

               transcript += transText;
            }
         }

         var oTrans = document.getElementById('transcript');
         oTrans.innerHTML = transcript;
      }
   };

   reqTrans.send();
}

function getVoices(speech) { // takes a text content and check if there are voices
   var voices = []; // inside
   var pos = speech.indexOf('<v'); // voices are like <v Michel> ....

   while (pos != -1) {
      endVoice = speech.indexOf('>');
      var voice = speech.substring(pos + 2, endVoice).trim();
      var endSpeech = speech.indexOf('</v>');
      var text = speech.substring(endVoice + 1, endSpeech);
      voices.push({
         'voice': voice,
         'text': text
      });
      speech = speech.substring(endSpeech + 4);
      pos = speech.indexOf('<v');
   }
   ...
```
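Both extracts above also use a small removeHTML() helper whose definition is not part of this extract. A minimal sketch of such a helper (assuming its only job is to strip markup from a piece of cue text) could be:

```
// Illustrative helper (not the course's original code):
// strips any HTML tags from a piece of cue text and returns plain text only.
function removeHTML(text) {
   var div = document.createElement('div');
   div.innerHTML = text;
   return div.textContent || '';
}
```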
1.3.2 Captions, Descriptions, Chapters and Metadata

Example #2: showing video description while playing, listening to events, changing the mode of a track.

Each track has a mode property (and a mode attribute) that can be: "disabled", "hidden" or "showing". More than one track at a time can be in any of these states. The difference between "hidden" and "disabled" is that hidden tracks can fire events (more on that at the end of the first example) whereas disabled tracks do not fire events.
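As a small sketch of how these modes behave (assuming a video element that has at least one <track> child):

```
// Grab the TextTrack object that corresponds to the first <track> element
var track = document.querySelector("video track").track;

track.mode = "hidden";   // cues are not rendered, but cuechange events still fire
track.addEventListener("cuechange", function() {
   console.log("active cue changed");
});

// track.mode = "disabled"; // no rendering and no events at all
// track.mode = "showing";  // cues are rendered on the video and events fire
```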
[Here is an example at JSBin](https://jsbin.com/bixoru/1/edit?html,css,js,output) that shows the use of the mode property, and how to listen for cue events in order to capture the current subtitle/caption from JavaScript. You can change the mode of each track in the video element by clicking on its button. This will toggle the mode of that track. All tracks with mode="showing" or mode="hidden" will have the content of their cues displayed in real time in a small area below the video.

In the screen-capture below, we have a WebVTT file displaying a scene's captions and descriptions.
JavaScript code:

```
var tracks, video, statusDiv, subtitlesCaptionsDiv;

function init() {
   video = document.querySelector("#myVideo");
   statusDiv = document.querySelector("#currentTrackStatuses");
   subtitlesCaptionsDiv = document.querySelector("#subtitlesCaptions");
   tracks = document.querySelectorAll("track");

   video.addEventListener('loadedmetadata', function() {
      console.log("metadata loaded");

      // defines cue listeners for the active track; we can do this
      // only after the video metadata have been loaded
      for(var i = 0; i < tracks.length; i++) {
         var t = tracks[i].track;
         if(t.mode === "showing") {
            t.addEventListener('cuechange', logCue, false);
         }
      }
      // display in a div the list of tracks and their status/mode value
      displayTrackStatus();
   });
}

function displayTrackStatus() {
   // display the status / mode value of each track.
   // In red if disabled, in green if showing
   for(var i = 0; i < tracks.length; i++) {
      var t = tracks[i].track;
      var mode = t.mode;

      if(mode === "disabled") {
         mode = "<span style='color:red'>" + t.mode + "</span>";
      } else if(mode === "showing") {
         mode = "<span style='color:green'>" + t.mode + "</span>";
      }
      appendToScrollableDiv(statusDiv, "track " + i + ":" + t.label
                            + " " + t.kind + " in "
                            + mode + " mode");
   }
}

function appendToScrollableDiv(div, text) {
   // we've got two scrollable divs. This function
   // appends text to the div passed as a parameter
   // The div is scrollable (thanks to CSS overflow:auto)
   var inner = div.innerHTML;
   div.innerHTML = inner + text + "<br/>";
   // Make it display the last line appended
   div.scrollTop = div.scrollHeight;
}

function clearDiv(div) {
   div.innerHTML = '';
}

function clearSubtitlesCaptions() {
   clearDiv(subtitlesCaptionsDiv);
}

function toggleTrack(i) {
   // toggles the mode of track i, removes the cue listener
   // if its mode becomes "disabled"
   // adds a cue listener if its mode was "disabled"
   // and becomes "hidden"
   var t = tracks[i].track;
   switch (t.mode) {
      case "disabled":
         t.addEventListener('cuechange', logCue, false);
         t.mode = "hidden";
         break;
      case "hidden":
         t.mode = "showing";
         break;
      case "showing":
         t.removeEventListener('cuechange', logCue, false);
         t.mode = "disabled";
         break;
   }
   // updates the status
   clearDiv(statusDiv);
   displayTrackStatus();
   appendToScrollableDiv(statusDiv, "<br>" + t.label + " are now " + t.mode);
}

function logCue() {
   // callback for the cue event
   if(this.activeCues && this.activeCues.length) {
      var t = this.activeCues[0].text; // text of current cue
      appendToScrollableDiv(subtitlesCaptionsDiv, "Active "
                            + this.kind + " changed to: " + t);
   }
}
```
1.3.3 With Buttons for Choosing the Subtitle Language

Example #3: adding buttons for choosing the subtitle/caption track

You might have noticed that with some browsers, before 2018, the standard implementation of the video element did not let the user choose the subtitle language. Recent browsers now offer a menu for choosing the track to display.

However, before it was available, it was easy to implement this feature using the Track API.

[Here is a simple example at JSBin](https://jsbin.com/balowuq/1/edit?html,css,js,output): we added two buttons below the video to enable/disable subtitles/captions and let you choose which track you prefer.
JavaScript code:
```
var langButtonDiv, currentLangSpan, video;

function init() {
   langButtonDiv = document.querySelector("#langButtonDiv");
   currentLangSpan = document.querySelector("#currentLang");
   video = document.querySelector("#myVideo");

   console.log("Number of tracks = " + video.textTracks.length);
   // Updates the display of the current track activated
   currentLangSpan.innerHTML = activeTrack();
   ...
}

function activeTrack() {
   for (var i = 0; i < video.textTracks.length; i++) {
      if(video.textTracks[i].mode === 'showing') {
         return video.textTracks[i].label + " ("
                + video.textTracks[i].language + ")";
      }
   }
   return "no subtitles/caption selected";
}

function buildButtons() {
   if (video.textTracks) { // if the video contains track elements
      // For each track, create a button
      for (var i = 0; i < video.textTracks.length; i++) {
         // We create buttons only for the caption and subtitle tracks
         var track = video.textTracks[i];
         if((track.kind !== "subtitles") && (track.kind !== "captions"))
            continue;

         // create a button for track number i
         ...
      }
   }
}

function createButton(track) {
   // Create a button
   var b = document.createElement("button");
   b.value = track.label;
   // use the lang attribute of the button to keep trace of the
   // associated track language. Will be useful in the click listener
   b.setAttribute("lang", track.language);
   b.addEventListener('click', function(e) {
      // Check which track is the track with the language we're looking for
      // Get the value of the lang attribute of the clicked button
      var lang = this.getAttribute('lang');

      for (var i = 0; i < video.textTracks.length; i++) {
         if (video.textTracks[i].language == lang) {
            video.textTracks[i].mode = 'showing';
         } else {
            video.textTracks[i].mode = 'hidden';
         }
      }
      ...
   });

   // Add the button to a div at the end of the HTML document
   langButtonDiv.appendChild(b);
}
```
External resources

- If you are interested in building a complete custom video player, MDN offers an online tutorial with further information about [styling and integrating a "CC" button](https://developer.mozilla.org/en-US/Apps/Build/Audio_and_video_delivery/Adding_captions_and_subtitles_to_HTML5_video).

- The MDN documentation on the [Web Video Text Tracks Format](https://developer.mozilla.org/fr/docs/Web/API/WebVTT_API) (WebVTT).
Example #4: making a simple chapter navigation menu

We can use WebVTT files to define chapters. The syntax is exactly the same as for subtitles/caption .vtt files. The only difference is in the declaration of the track. Here is how we declared a chapter track in one of the previous examples:
HTML code:
+
Currently, no browser takes chapter tracks into account. You could use one of the enhanced video players presented during the HTML5 Part 1 course, but as you will see in this lesson: making your own chapter navigation menu is not complicated.
-There are 7 cues (one for each chapter). Each cue id is the word
-"chapter-" followed by the chapter number, then we have the start and
-end time of the cue/chapter, and the cue content. In this case: the
-description of the chapter ("Introduction", "Watch out!", "Let's go",
-etc...).
+
There are 7 cues (one for each chapter). Each cue id is the word "chapter-" followed by the chapter number, then we have the start and end time of the cue/chapter, and the cue content; in this case, the description of the chapter ("Introduction", "Watch out!", "Let's go", etc.).
Hmm... let's try to open this chapter track with [the example we wrote in a previous lesson - the one that displayed the clickable transcript for subtitles/captions on the right of the video](https://jsbin.com/zeqoleq/1/edit?html,css,js,output). We need to modify it a little bit: we modify the loadTranscript function from the previous example so that it matches both the srclang and the kind attribute of the track.
Here is a new version:

```
function loadTranscript(lang, kind) {
   ...
   // Locate the track with lang and kind that match the parameters
   for(var i = 0; i < tracks.length; i++) {
      ...
      if((track.language === lang) && (track.kind === kind)) {
         // display its contents...
      }
   }
}
```
Simple approach: chapters as clickable text on the right of the video.

Try it on JSBin; this version includes the modifications we presented earlier - nothing more. Notice that we kept the existing buttons to display a clickable transcript.

Look at the JavaScript and HTML tab of the JSBin example to see the source code. It's the same as in the clickable transcript example, except for the small changes we explained earlier.

Chapter navigation, illustrated in the video player below, is fairly popular.

In addition to the clickable chapter list, this one displays an enhanced progress bar created using a canvas. The small squares are drawn corresponding to the chapter cues' start and end times. You could modify the code provided, in order to add such an enhanced progress indicator.

However, we will see how we can do better by using JSON objects as cue contents. This will be the topic of the next two lessons!
1.3.5 With Thumbnails, Using JSON Cues

Example #5: create a chapter menu with image thumbnails.

Instead of using text (optionally using HTML for styling, multi lines, etc.), it is also possible to use JSON objects as cue values that can be manipulated from JavaScript. JSON means "JavaScript Object Notation". It's an open standard for describing JavaScript objects as plain text.

Here is an example cue from a WebVTT file encoded as JSON instead of plain text. JSON is useful for describing "structured data", and processing such data from JavaScript is easier than parsing plain text.
```
WEBVTT

Wikipedia
00:01:15.200 --> 00:02:18.800
{
  "title": "State of Wikipedia",
  "description": "Jimmy Wales talking ...",
  "src": "https://upload.wikimedia.org/...../120px-Wikipedia-logo-v2.svg.png",
  "href": "https://en.wikipedia.org/wiki/Wikipedia"
}
```
This JSON object is a JavaScript object encoded as a text string. If we listen for cue events or if we read a WebVTT file as done in the previous examples, we can extract this text content using the cue.text property. For example:
```
var videoElement = document.querySelector("#myvideo");
var textTracks = videoElement.textTracks; // one for each track element
var textTrack = textTracks[0]; // corresponds to the first track element
...
// convert the cue text content (a JSON string)
// to a real JavaScript object
var obj = JSON.parse(cue.text);

var title = obj.title; // "State of Wikipedia"
var description = obj.description; // Jimmy Wales talking...
// etc...
```
This is a powerful way of embedding metadata, especially when used in conjunction with listening for cue and track events.

Improved approach: make a nicer chapter menu by embedding a richer description of chapter markers

Earlier we saw [an example that could display chapter markers as clickable text on the right of a video](https://jsbin.com/jiyodit/edit?html,css,js,output).
We used this example to manually capture the images from the video that correspond to each of the seven chapters:

- We clicked on each chapter link on the right, then paused the video,

- then we used a screen capture tool to grab each image that corresponds to the beginning of the chapter,

- finally, we resized the images with Photoshop to approximately 200x400 pixels.

(For advanced users: it's possible to semi-automate this process using the ffmpeg command line tool.)

Here are the images which correspond to the seven chapters of the video from the previous example:

To associate each of these images with its chapter description, we will use JSON objects as cue contents:
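The new chapter .vtt file is not reproduced in this extract, but the code below reads a description and an image property from each cue, so each JSON cue has roughly this shape (the timecodes and file name here are illustrative only):

```
WEBVTT

chapter-1
00:00:00.000 --> 00:00:26.000
{
  "description": "Introduction",
  "image": "chapter1.png"
}
```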
Before explaining the code, we propose that you [try this example at JSBin that uses this new .vtt file](https://jsbin.com/pulefe/1/edit?html,css,js,output):

It's the same code we had in the first example, except that this time we use a new WebVTT file that uses JSON cues to describe each chapter. For the sake of simplicity, we also removed the buttons and all the code for displaying a clickable transcript of the subtitles/captions on the right of the video.
JavaScript code:

```
1. var video, chapterMenuDiv;
2. var tracks, trackElems, tracksURLs = [];
3.
4. window.onload = function() {
5.    console.log("init");
6.    // When the page is loaded
7.    video = document.querySelector("#myVideo");
8.    chapterMenuDiv = document.querySelector("#chapterMenu");
9.
10.   // Get the tracks as HTML elements
11.   trackElems = document.querySelectorAll("track");
12.   for(var i = 0; i < trackElems.length; i++) {
13.      var currentTrackElem = trackElems[i];
14.      tracksURLs[i] = currentTrackElem.src;
15.   }
16.
17.   // Get the tracks as JS TextTrack objects
18.   tracks = video.textTracks;
19.
20.   // Build the chapter navigation menu for the given lang and kind
21.   buildChapterMenu('en', 'chapters');
22. };
23.
24. function buildChapterMenu(lang, kind) {
25.    // Locate the track with language = lang and kind="chapters"
26.    for(var i = 0; i < tracks.length; i++) {
27.       // current track
28.       var track = tracks[i];
29.       var trackAsHtmlElem = trackElems[i];
30.
31.       if((track.language === lang) && (track.kind === kind)) {
32.          // the track must be active, otherwise it will not load
33.          track.mode = "showing"; // "hidden" would work too
34.
35.          if(trackAsHtmlElem.readyState === 2) {
36.             // the track has already been loaded
...
46.    var cues = track.cues;
47.
48.    // We must not see the cues on the video
49.    track.mode = "hidden";
50.
51.    // Iterate on cues
52.    for(var i = 0, len = cues.length; i < len; i++) {
53.       var cue = cues[i];
54.
55.       var cueObject = JSON.parse(cue.text);
56.       var description = cueObject.description;
57.       var imageFileName = cueObject.image;
58.       var imageURL = "https://mainline.i3s.unice.fr/mooc/" + imageFileName;
59.
60.       // Build the marker. It's a figure with an img and a figcaption inside.
61.       // The img has an onclick listener that will make the video jump
62.       // to the start time of the current cue/chapter
63.       var figure = document.createElement('figure');
64.       figure.classList.add("img");
65.
66.       figure.innerHTML = "<img onclick='jumpTo("
67.                          + cue.startTime + ");' class='thumb' src='"
68.                          + imageURL + "'><figcaption class='desc'>"
69.                          + description + "</figcaption></figure>";
70.       // Add the figure to the chapterMenuDiv
71.       chapterMenuDiv.insertBefore(figure, null);
72.    }
73. }
74.
75. function displayChapterMarkersAfterTrackLoaded(trackElem, track) {
76.    // Create a listener that will only be called when the track has
77.    // been loaded
78.    trackElem.addEventListener('load', function(e) {
79.       console.log("chapter track loaded");
80.       displayChapterMarkers(track);
81.    });
82. }
83.
84. function jumpTo(time) {
85.    video.currentTime = time;
86.    video.play();
87. }
```
Explanations:
- Lines 4-18: when the page is loaded, we assemble all of the track HTML elements and their corresponding TextTrack objects.

- Line 19: using that we can build the chapter navigation menu. All is done in the window.onload callback, so nothing happens until the DOM is ready.

- Lines 24-43: the buildChapterMenu function first locates the chapter track for the given language, then checks if this track has been loaded by the browser. Once it has been confirmed that the track is loaded, the function displayChapters is called.

- Lines 45-65: the displayChapters(track) function will iterate over all of the cues within the chapter track passed as its parameter. For each cue, the JSON content is re-formatted back into a JavaScript object (line 55) and the image filename and description of the chapter/cue are extracted (lines 56-57). Then an HTML description for the chapter is built and added to the div element with id=chapterMenu.
Notice that we add a click listener to each thumbnail image. Clicking a chapter thumbnail will cause the video to jump to the chapter time location (the example above is for the first chapter with start time = 0).

We also added CSS classes "img", "thumb" and "desc", which make it easy to style and position the thumbnails using CSS.
CSS code (extract):

```
#chapterMenuSection {
   background-color: lightgrey;
   border-radius: 10px;
   padding: 20px;
   ...
}

.thumb:hover {
   box-shadow: 5px 5px 5px black;
}
```
A sample menu marker is shown below (it's also animated - hover your mouse over the thumbnail to see its animated shadow):

(Figure: a chapter thumbnail captioned "Introduction".)
Combining techniques: a clickable transcript and a chapter menu

This example is the same as the previous one except that we have kept the features that we saw previously: the buttons for displaying a clickable transcript. The code is longer, but it's just a combination of the "clickable transcript" example from the previous lesson, and the code from earlier in this lesson.
In this lesson, we are going to show:

- the addTextTrack method of the <audio> and <video> elements, for adding a new TextTrack,

- the VTTCue constructor, for creating cues programmatically, and

- the addCue method, for adding cues on the fly to a TextTrack, etc.

These methods will allow us to create TextTrack objects and cues on the fly, programmatically.
The presented example shows how we can create "sound sprites": small sounds that are parts of an mp3 file, and that can be played separately. Each sound will be defined as a cue in a track associated with the <audio> element.
Let's create on the fly a WebVTT track with many cues, in order to cut a big sound file into segments and play them on demand.
The idea is to create a track on the fly, then add cues within this track. Each cue will be created with the id, the start and end time taken from the above JavaScript object. In the end, we will have a track with individual cues located at the time location where an animal sound is in the mp3 file.
Then we generate buttons in the HTML document, and when the user clicks on a button, the getCueById method is called, then the start and end time properties of the cue are accessed and the sound is played (using the currentTime property of the audio element).
Polyfill for getCueById: Note that this method is not available on all browsers yet. A simple polyfill is used in the examples presented. If the getCueById method is not implemented (this is the case in some browsers), it's easy to use this small polyfill:

JavaScript code!
1. // for browsers that do not implement the getCueById() method
2.
3. // let's assume we're adding the getCueById function to a TextTrack object
4. // named "track"
5. if (typeof track.getCueById !== "function") {
6. track.getCueById = function(id) {
7. var cues = track.cues;
8. for (var i = 0; i != track.cues.length; ++i) {
9. if (cues[i].id === id) {
10. return cues[i];
11. }
12. }
13. };
14. }
Techniques
To add a new TextTrack to an audio or video element, use the element's addTextTrack method. The function's signature is addTextTrack(kind[, label[, language]]), where kind is our familiar choice between subtitles, captions, chapters, etc. The optional label is any text you'd like to use to describe the track, and the optional language comes from our usual list of BCP-47 abbreviations, e.g. 'de', 'en', 'es', 'fr', etc.

The VTTCue constructor enables us to create our own cue instances programmatically. We create a cue instance by using the new keyword. The constructor function expects three familiar arguments: new VTTCue(startTime, endTime, text) - more detail is available from MDN and the W3C's two applicable groups.

To add cue instances to a TextTrack on the fly, use the track object's addCue method, e.g. track.addCue(cue). The argument is a cue instance, as above. Note that the track must be a TextTrack object, because addCue does not work with HTMLTrackElement objects.
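Put together, the three techniques above can be sketched as follows (a minimal illustration, not the code of the example below; the element id, track kind and cue values are made up):

```
var video = document.querySelector("#myVideo");               // assumed id

// 1 - create a new TextTrack on the media element
var track = video.addTextTrack("chapters", "My chapters", "en");
track.mode = "hidden";                                          // hidden tracks still fire events

// 2 - create a cue programmatically: new VTTCue(startTime, endTime, text)
var cue = new VTTCue(0, 10, "Chapter 1");
cue.id = "chap1";

// 3 - add the cue to the track on the fly
track.addCue(cue);
```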
HTML source code extract:
JavaScript code!
1. ...
+2. <h1>Playing audio sprites with the track element</h1>
+3. <p>A demo by Sam Dutton, adapted for JsBin by M.Buffa</p>
+4.
+5. <div id="soundButtons" class="isSupported"></div>
+6. ...
+7. window.onload = function() {
+8. // Create an audio element programmatically
9. var audio = new Audio("https://mainline.i3s.unice.fr/mooc/animalSounds.mp3");
+10.
+11. audio.addEventListener("loadedmetadata", function() {
+12. // When the audio file has its metadata loaded, we can add
+13. // a new track to it, with mode = hidden. It will fire events
+14. // even if it is hidden
15. var track = audio.addTextTrack("metadata", "sprite track", "en");
16. track.mode = "hidden";
+17.
+18. // for browsers that do not implement the getCueById() method
+19. if (typeof track.getCueById !== "function") {
+20. track.getCueById = function(id) {
+21. var cues = track.cues;
+22. for (var i = 0; i != track.cues.length; ++i) {
+23. if (cues[i].id === id) {
+24. return cues[i];
+25. }
+26. }
+27. };
+28. }
+29.
+30. var sounds = [
+31. {
+32. id: "purr",
+33. startTime: 0.200,
+34. endTime: 1.800
+35. },
+36. {
+37. id: "meow",
+38. startTime: 2.300,
+39. endTime: 3.300
+40. },
+41. ...
+42. ];
+43.
+44. for (var i = 0; i !== sounds.length; ++i) {
+45. // for each animal sound, create a cue with id, start and end time
+46. var sound = sounds[i];
47. var cue = new VTTCue(sound.startTime, sound.endTime, sound.id);
+48. cue.id = sound.id;
+49. // add it to the track
+50. track.addCue(cue);
+51. // create a button and add it to the HTML document
+52. document.querySelector("#soundButtons").innerHTML +=
53. "<button class='playSound' id='"
54. + sound.id + "'>" + sound.id
55. + "</button>"; }
+56.
+57. var endTime;
+58. audio.addEventListener("timeupdate", function(event) {
+59. // When we play a sound, we set the endtime var.
+60. // We need to listen when the audio file is being played,
+61. // in order to pause it when endTime is reached.
+62. if (event.target.currentTime > endTime)
+63. event.target.pause();
+64. });
+65.
+66. function playSound(id) {
+67. // Plays the sound corresponding to the cue with id equal
+68. // to the one passed as a parameter. We set the endTime var
+69. // and position the audio currentTime at the start time
+70. // of the sound
+71. var cue = track.getCueById(id);
+72. audio.currentTime = cue.startTime;
+73. endTime = cue.endTime;
+74. audio.play();
+75. };
+76. // create listeners for all buttons
+77. var buttons = document.querySelectorAll("button.playSound");
78. for(var i = 0; i < buttons.length; i++) {
+79. buttons[i].addEventListener("click", function(e) {
+80. playSound(this.id);
+81. });
+82. }
+83. });
+84. };
+85.
1.4.2 Update the Document in Sync with a Media Playing

Mixing JSON cue content with track and cue events makes the synchronization of elements in the HTML document (while the video is playing) much easier.
Example of track event listeners that use JSON cue contents
Here is a small code extract that shows how we can capture the JSON content of a cue when the video reaches its start time. We do this within a cuechange listener attached to a TextTrack:
JavaScript code!

1. textTrack.oncuechange = function (){
2. // "this" is the textTrack that fired the event.
3. // Let's get the first active cue for this time segment
4. var cue = this.activeCues[0];
5. var obj = JSON.parse(cue.text);
6. // do something
7. }
WARNING: as this Google service is no longer free of charge, you might see "for development purpose only" messages during the execution of this demo. You'll need a valid Google API key in order to remove these messages.
We can acquire a cue DOM object using the techniques we have seen
previously, or by using the new HTML5 TextTrack getCueById() method.
1. var videoElement = document.querySelector("#myvideo");
2. var textTracks = videoElement.textTracks; // one for each track element
3. var textTrack = textTracks[0]; // corresponds to the first track element
4. // Get a cue with ID="Wikipedia"
5. var cue = textTrack.getCueById("Wikipedia");
And once we have a cue object, it is possible to add event listeners to it:
1. cue.onenter = function(){
2. // display something, play a sound, update any DOM element...
3. };
4.
5. cue.onexit = function(){
6. // do something else
7. };
If the getCueById method is not implemented (this is the case in some browsers), we use the polyfill presented in the previous section:
1. // for browsers that do not implement the getCueById() method
2.
3. // let's assume we're adding the getCueById function to a TextTrack object
4. // named "track"
5. if (typeof track.getCueById !== "function") {
6. track.getCueById = function(id) {
7. var cues = track.cues;
8. for (var i = 0; i != track.cues.length; ++i) {
9. if (cues[i].id === id) {
10. return cues[i];
11. }
12. }
13. };
14. }

Example that displays a Wikipedia page and a Google map while a video is playing
HTML code!
[HTML source not preserved: the page, titled "Example syncing element of the document with video metadata in webVTT file", defines the video element, its track, the iframe and the map container used by the JavaScript below.]
JavaScript code!
1. window.onload = function() {
+2. var videoElement = document.querySelector("#myVideo");
+3. var myIFrame = document.querySelector("#myIframe");
+4. var currentURLSpan = document.querySelector("#currentURL");
5.
+6. var textTracks = videoElement.textTracks; // one for each track element
+7. var textTrack = textTracks[0]; // corresponds to the first track element
8.
+9. // change mode so we can use the track
+10. textTrack.mode = "hidden";
+11. // Default position on the google map
+12. var centerpos = new google.maps.LatLng(48.579400,7.7519);
13.
+14. // default options for the google map
+15. var optionsGmaps = {
+16. center:centerpos,
+17. navigationControlOptions: {style:
+18. google.maps.NavigationControlStyle.SMALL},
+19. mapTypeId: google.maps.MapTypeId.ROADMAP,
+20. zoom: 15
+21. };
22.
+23. // Init map object
+24. var map = new google.maps.Map(document.getElementById("map"),
+25. optionsGmaps);
26.
+27. // cue change listener, this is where the synchronization between
+28. // the HTML document and the video is done
+29. textTrack.oncuechange = function (){
+30. // we assume that we have no overlapping cues
+31. var cue = this.activeCues[0];
+32. if(cue === undefined) return;
33.
+34. // get cue content as a JavaScript object
+35. var cueContentJSON = JSON.parse(cue.text);
36.
+37. // do different things depending on the type of sync (wikipedia, gmap)
+38. switch(cueContentJSON.type) {
39. case 'WikipediaPage':
+40. var myURL = cueContentJSON.url;
41. var myLink = "<a href='" + myURL + "'>" + myURL + "</a>";
+42. currentURLSpan.innerHTML = myLink;
43.
+44. myIFrame.src = myURL; // assign url to src property
+45. break;
+46. case 'LongLat':
+47. drawPosition(cueContentJSON.long, cueContentJSON.lat);
+48. break;
+49. }
+50. };
51.
+52. function drawPosition(long, lat) {
+53. // Make new object LatLng for Google Maps
+54. var latlng = new google.maps.LatLng(lat, long);
55.
+56. // Add a marker at position
+57. var marker = new google.maps.Marker({
+58. position: latlng,
+59. map: map,
+60. title:"You are here"
+61. });
62.
+63. // center map on longitude and latitude
+64. map.panTo(latlng);
+65. }
+66. };
All the critical work is done by the cuechange event listener, lines 27-50. We have only the one track, so we set its mode to "hidden" (line 10) in order to be sure that it will be loaded, and that playing the video will fire cuechange events on it. The rest is just Google Maps code and classic DOM manipulation for updating HTML content (a span that will display the current URL, line 42).
1.5 The Web Audio API
Welcome to the WebAudio API lesson! I personally love this API; playing with it is a lot of fun, as you will discover! I hope you will like it as much as I do!
The audio and video elements are used for playing streamed content, but we do not have real control over the audio. They come with a powerful API, as we saw during the previous course and the previous lessons of this course: we can build a custom user interface and make our own play, stop and pause buttons.
We can control the video from JavaScript, listen to events, manage playlists, etc. However, we have no real control over the audio signal: fancy visualizations that dance with the music are impossible to do, and so are sound effects such as reverberation and delay, building an equalizer, or controlling the stereo to put the signal on the left or on the right. Furthermore, playing multiple sounds in sync is nearly impossible due to the streamed nature of the signal. For video games we need to play many different sounds very quickly, and you cannot wait for the stream to arrive before starting to play it.
Web Audio is the solution to all these needs: with Web Audio you will be able to get the output signal from the audio and video elements and process it with multiple effects. You will be able to work with samples loaded in memory, which enables perfect syncing, accurate loops, mixing sounds, etc. You can also generate music programmatically for creating synthetic sounds or virtual instruments. This part will not be covered by this course, even though I give links to interesting libraries and demos that do that.
Let's have a look at some applications. The first thing I want to show you is just what we can do with the standard audio element.
This is the standard audio element [music] that just plays a guitar riff coming from a server, but we can take control of the audio stream and do things like this [music]. As you can see, I control the stereo balancing here, and we have a real time waveform and volume meter visualization.
Another thing we can do is load samples in memory.
This is an application I wrote for playing multitrack songs. We are loading MP3s and decoding them in memory so that we can click anywhere on the song, and I can make loops like this [music].
As you can see, we can isolate the tracks, we can mix them in real time.
Another application that works with samples in memory is this small example - you will learn how to write it in the course: we loaded two different short sounds [sounds] in memory and we can play them repeatedly [sounds], or we can add effects like changing the pitch, changing the volume with some random values, and play them at random intervals [sounds].
We can see that the application to video games is straightforward.
Another thing you can do is use synthetic sounds. We will not cover the techniques, but you can use some libraries. This is a library that works with synthetic sounds: you do not have to load a file to get these sounds [sounds].
This is a library for making 8-bit sounds like the very first computers and video games in the 80's used to produce. You can also make very complex applications, like a vocoder [sounds], or a synthesizer music instrument [sounds]. Ok, you have got the idea.
These are all the interesting things you can do, and you can also learn how to debug such applications. I will make a video especially for that, but using Firefox, you can activate, in the settings of the dev tools, the Web Audio debug tab.
I clicked here on Web Audio, and this added a new tab here, Web Audio, and if I reload the page, I can see the graph corresponding to the route of the signal.
Here we have got a source - this is called the audio graph - so we've got the source, and we've got a destination. The source is the original sound. In that case it is a MediaElementAudioSource node that corresponds to the audio element here.
The signal goes to another node that is provided by the Web Audio API and implemented natively in your browser: a StereoPanner, for separating the sound between left and right.
Then it goes to an analyser here that will draw the blue waveform, and finally to the destination, and the destination is the speakers. I also routed the signal to another part of the graph just for displaying two different analysers corresponding to the left and right channels. This is for the volume meters here [music].
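A minimal sketch of the routing described here, assuming an <audio> element with id "player" (the variable names are illustrative, not the demo's actual code):

```
// mediaElementSource -> stereoPanner -> analyser -> speakers
var audioContext = new AudioContext();
var player = document.querySelector("#player");            // assumed id of the <audio> element
var source = audioContext.createMediaElementSource(player);
var stereoPanner = audioContext.createStereoPanner();      // left/right balance
var analyser = audioContext.createAnalyser();              // used to draw the waveform

source.connect(stereoPanner);
stereoPanner.connect(analyser);
analyser.connect(audioContext.destination);                // the speakers
```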
And if you click on a node, you can see that some nodes have parameters.
On the stereoPanner, which enables me to balance the sound to the left or to the right, you can see if I change that and click again, I can debug the different properties of each node. You will learn how to build this graph, how to assemble the different nodes, what the most useful nodes are for adding effects, controlling the volume, controlling the stereo, making an equalizer, creating fancy visualizations, and so on.
Welcome to the Web Audio world: over a few lessons, you will learn step by step how to build such an application.
Shortcomings of the standard APIs that we have discussed so far...
In Module 2 of the HTML5 Coding Essentials course, you learned how to add an audio or video player to an HTML document, using the <audio> and <video> elements.

Example #2: equalizer with a <video> element

We cloned the previous example and simply changed the <audio>...</audio> part of the HTML code into a <video> element.

And the example works in the same way, but this time with a video. Try moving the sliders to change the sound!

[Image: Same example as previously but with a video above the equalizer.]
1.5.5 Waveforms
Hi, today I will show you how to write a waveform that will dance with the music. I prepared a small skeleton that is just composed of an audio element with an mp3 file that will be streamed when I press the play button.
The sound is captured in an audio graph using a media element source, as we explained in a previous lesson.
Let's start from the beginning and look at this example. When the page is loaded, we go to the onload listener, we create an audio context because we are going to work with the Web Audio API, and then we are just using the standard way of working with a canvas.
We get the canvas, and we get the canvas context, so we are ready to draw things in the canvas here. Then we build an audio graph and then we start an animation... and for the moment the animation just draws a horizontal line 60 times per second. We will look at it later.
Let's have a look at the audio graph. The audio graph is made of a source node that is the media element source. It corresponds to the audio element.
Then we create an analyser. This is a special node that will provide, on demand, the time domain and frequency domain analysis data, and this data will be useful for drawing a waveform or frequencies that dance with the music, for drawing volume meters, or for doing beat detection. We will see examples in the next lessons.
When you create an analyser node, you specify the size of the Fast Fourier Transform (FFT). Do not worry if you do not know exactly what this technique is; it is just a size that determines how many data values you will have to draw.
Here, for a waveform, classical values are 1024 or 2048. They must be powers of 2.
The number of data values we will get depends on this size: it is exactly the FFT size divided by 2. In this example I set the FFT size to 1024, so I will get 512 values...
Here is how we can declare a buffer that will get the data. It is called dataArray here and we will use it in the animation loop.
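In code, that declaration looks roughly like this (the same analyser, bufferLength and dataArray names are used in the examples later in this lesson):

```
// analyser is the AnalyserNode created earlier with audioContext.createAnalyser()
analyser.fftSize = 1024;
var bufferLength = analyser.frequencyBinCount;   // = fftSize / 2 = 512 here
var dataArray = new Uint8Array(bufferLength);    // filled later, in the animation loop
```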
I've got a source node that corresponds to the audio stream, an analyser node that will analyse the stream, and then I connect the source to the analyser, and the analyser to the destination, and the destination is the speakers.
Let's have a look at the animation loop! We clear the canvas, and at the end we call the animation loop again, so that the animation runs 60 times per second... and the way we will draw the waveform is just a set of connected lines that we call "a path".
The "path" is a way of drawing that we presented during the HTML5 Part 1 course.
So here, for drawing a horizontal, flat waveform, we just loop over the number of data values. Here, we are not using the real data, but we loop 512 times and compute an increment in x, so that, depending on the width of the canvas and on the number of data values we have to draw, we add this increment to the x coordinate.
And the y coordinate here is just faked because we use height/2.
I can add a random element here if you like, and we will see that I can just fake some data to be drawn. So, instead of drawing this, I will use the real values. How can we get the data? In the animation loop, 60 times per second, we call analyser.getByteTimeDomainData and we pass the dataArray of the correct size.
And just after this call, the dataArray will contain the data we want to draw. What is interesting here is the value of each data point... and we will compute the y coordinate depending on that. I'm just declaring a variable that will get the value. This value is between 0 and 255 because we are working with bytes, and bytes are 8-bit encoded data, so the value is between 0 and 255. I will normalize it, so now it's between 0 and 1. Then, in order to compute the y value, I just have to scale it to the height of the canvas, like this...
And now if I play the sound, I've got the waveform that is animated.
These 3 lines are very straightforward: they just transform a value between 0 and 255 and scale it to the height of the canvas...
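The three lines in question look roughly like the following (variable names match the visualize function shown later on this page; dataArray, height, x and canvasContext come from the surrounding example):

```
var v = dataArray[i] / 255;   // normalize the byte value to [0, 1]
var y = v * height;           // scale it to the canvas height, in pixels
canvasContext.lineTo(x, y);   // extend the waveform path to this point
```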
Do try and study the code in [the JSBin example created during the Live Coding Video](https://jsbin.com/sequtas/edit).
WebAudio offers an Analyser node that provides real-time frequency and time-domain analysis information. It leaves the audio stream unchanged from the input to the output, but allows us to acquire data about the sound signal being played. This data is easy for us to process since complex computations such as Fast Fourier Transforms are executed behind the scenes.

Example #1: audio player with waveform visualization

[Example at JSBin](https://jsbin.com/sufatup/edit?html,js,output)
1. window.onload = function() {
2. // get the audio context
3. audioContext= ...;
4.
5. // get the canvas, its graphic context...
+6. canvas = document.querySelector("#myCanvas");
7. width = canvas.width;
8. height = canvas.height;
+9. canvasContext = canvas.getContext('2d');
10.
11. // Build the audio graph with an analyser node at the end
12. buildAudioGraph();
13.
14. // starts the animation at 60 frames/s
15. requestAnimationFrame(visualize);
16. };
Step #1: build the audio graph with an analyser node at the end
If we want to visualize the sound that is coming out of the speakers, we have to put an analyser node at almost the end of the sound graph. Example #1 shows a typical use: an <audio> element, a MediaElementAudioSource node connected to an Analyser node, and the analyser node connected to the speakers (audioContext.destination). The visualization is a graphic animation that uses the requestAnimationFrame API presented in the W3C HTML5 Coding Essentials and Best Practices course (Module 4).
JavaScript code!
1. function buildAudioGraph() {
+2. var mediaElement = document.getElementById('player');
3. var sourceNode = audioContext.createMediaElementSource(mediaElement);
4.
5. // Create an analyser node
13.
14. sourceNode.connect(analyser);
15. analyser.connect(audioContext.destination);
16. }
With the exception of lines 8-12, where we set the analyser options (explained later), we build the following graph (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension):
Step #2: write the animation loop
The visualization itself depends on the options which we set for the analyser node. In this case we set the FFT size to 1024 (FFT is a kind of accuracy setting: the bigger the value, the more accurate the analysis will be. 1024 is common for visualizing waveforms, while lower values are preferred for visualizing frequencies). Here is what we set in this example:
1. analyser.fftSize = 1024;
2. bufferLength = analyser.frequencyBinCount;
3. dataArray = new Uint8Array(bufferLength);

- Line 1: we set the size of the FFT,
- Line 3: this is the byte array that will contain the data we want to visualize. Its length is equal to fftSize/2.
When we build the graph, these parameters are set - effectively as constants - to control the analysis during play-back.
Here is the code that is run 60 times per second to draw the waveform:
JavaScript code!
1. function visualize() {
+2. // 1 - clear the canvas
+3. // like this: canvasContext.clearRect(0, 0, width, height);
+4.
+5. // Or use rgba fill to give a slight blur effect
+6. canvasContext.fillStyle = 'rgba(0, 0, 0, 0.5)';
+7. canvasContext.fillRect(0, 0, width, height);
+8.
+9. // 2 - Get the analyser data - for waveforms we need time domain data
+10. analyser.getByteTimeDomainData(dataArray);
11.
+12. // 3 - draws the waveform
+13. canvasContext.lineWidth = 2;
+14. canvasContext.strokeStyle = 'lightBlue';
+15.
+16. // the waveform is in one single path, first let's
+17. // clear any previous path that could be in the buffer
+18. canvasContext.beginPath();
+19.
+20. var sliceWidth = width / bufferLength;
+21. var x = 0;
+22.
+23. for(var i = 0; i < bufferLength; i++) {
+24. // dataArray values are between 0 and 255,
+25. // normalize v, now between 0 and 1
+26. var v = dataArray[i] / 255;
+27. // y will be in [0, canvas height], in pixels
+28. var y = v * height;
+29.
+30. if(i === 0) {
+31. canvasContext.moveTo(x, y);
+32. } else {
+33. canvasContext.lineTo(x, y);
+34. }
+35.
+36. x += sliceWidth;
+37. }
38.
+39. canvasContext.lineTo(canvas.width, canvas.height/2);
40.
41. // draw the path at once
42. canvasContext.stroke();
43.
44. // once again call the visualize function at 60 frames/s
45. requestAnimationFrame(visualize);
46. }

Explanations:
- Lines 9-10: we ask for the time domain analysis data. The call to getByteTimeDomainData(dataArray) will fill the array with values corresponding to the waveform to draw. The returned values are between 0 and 255. See the [specification for details about what they represent exactly in terms of audio processing](https://webaudio.github.io/web-audio-api/#widl-AnalyserNode-getByteTimeDomainData-void-Uint8Array-array).

Below are other examples that draw waveforms.
Example #2: video player with waveform visualization

Using a <video> element is very similar to using an <audio> element. We have made no changes to the JavaScript code here; we just changed "audio" to "video" in the HTML code.

[Example at JSBin](https://jsbin.com/fuyejuz/edit?html,js,console,output):

[Image: A video player with real time waveform visualization.]

Example #3: both previous examples, this time with the graphic equalizer

Adding the graphic equalizer to the graph changes nothing: we still visualize the sound that goes to the speakers. Try lowering the slider values - you should see the waveform changing.

[Example at JSBin](https://jsbin.com/qijujuz/edit?html,js,output)

[Image: Audio player with frequency visualisations with red bars.]

This time, instead of a waveform we want to visualize an animated bar chart. Each bar will correspond to a frequency range and 'dance' in concert with the music being played.

- The frequency range depends upon the sample rate of the signal (the audio source) and on the FFT size. While the sound is being played, the values change and the bar chart is animated.
- The number of bars is equal to the FFT size / 2 (left screenshot with size = 512, right screenshot with size = 64).
- In the example above, the Nth bar (from left to right) corresponds to the frequency range N * (sampleRate/fftSize). If we have a sample rate equal to 44100 Hz and an FFT size equal to 512, then the first bar represents frequencies between 0 and 44100/512 ≈ 86.13 Hz, etc. As the amount of data returned by the analyser node is half the FFT size, we will only be able to plot the frequency range up to half the sample rate. You will see that this is generally enough, as frequencies in the second half of the sample rate are not relevant.
- The height of each bar shows the strength of that specific frequency bucket. It's just a representation of how much of each frequency is present in the signal (i.e. how "loud" the frequency is).

You do not have to master the signal processing 'plumbing' summarised above - just plot the reported values!
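If you want to check that arithmetic yourself, here is a tiny sketch (the variable names are illustrative, assuming the audioContext and analyser from the examples on this page):

```
// Width in Hz of each frequency bin reported by the analyser
var binWidth = audioContext.sampleRate / analyser.fftSize;   // e.g. 44100 / 512 ≈ 86.13 Hz

// The Nth bar (0-based) covers frequencies [n * binWidth, (n + 1) * binWidth]
function barFrequencyRange(n) {
  return [n * binWidth, (n + 1) * binWidth];
}
```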
Enough said! Let's study some extracts from the source code.
This code is very similar to the first example given at the top of this page. We've set the FFT size to a lower value, and rewritten the animation loop to plot frequency bars instead of a waveform:
JavaScript code!
1. function buildAudioGraph() {
+2. ...
+3. // Create an analyser node
+4. analyser = audioContext.createAnalyser();
+5.
+6. // Try changing to lower values: 512, 256, 128, 64...
+7. // Lower values are good for frequency visualizations,
+8. // try 128, 64 etc.?
9. analyser.fftSize = 256;
10. ...
11. }
This time, when building the audio graph, we have used a smaller FFT size. Values between 64 and 512 are very common here. Try them in the JSBin example! Apart from the lines in bold, this function is exactly the same as in the first example.

The new visualization code:
1. function visualize() {
2. // clear the canvas
3. canvasContext.clearRect(0, 0, width, height);
4.
5.
6. analyser.getByteFrequencyData(dataArray);
7.
8. var barWidth = width / bufferLength;
9. var barHeight;
10. var x = 0;
11.
12. // values go from 0 to 255 and the canvas height is 100. Let's rescale
13. // before drawing. This is the scale factor
14. heightScale = height/128;
15.
+16. for(var i = 0; i < bufferLength; i++) {
17. // between 0 and 255
18. barHeight = dataArray[i];
19.
20. // The color is red but lighter or darker depending on the value
+21. canvasContext.fillStyle = 'rgb(' + (barHeight+100) + ',50,50)';
22. // scale from [0, 255] to the canvas height [0, height] pixels
23. barHeight *= heightScale;
24. // draw the bar
25. canvasContext.fillRect(x, height - barHeight, barWidth, barHeight);
26.
27. // leave 1 pixel between bars
28. x += barWidth + 1;
29. }
30.
31. // once again call the visualize function at 60 frames/s
32. requestAnimationFrame(visualize);
33. }
Explanations:
- Line 6: this is different to the code which draws a waveform! We ask for byteFrequencyData (vs byteTimeDomainData earlier) and it returns an array of fftSize/2 values between 0 and 255.
- Lines 16-29: we iterate over the values. The x position of each bar is incremented at each iteration (line 28), adding a small interval of 1 pixel between bars (you can try different values here). The width of each bar is computed at line 8.
- Line 14: we compute a scale factor to be able to display the values (ranging from 0 to 255) in direct proportion to the height of the canvas. This scale factor is used in line 23, when we compute the height of the bars we are going to draw.

Other examples: achieving more impressive frequency visualization

[Example at JSBin](https://jsbin.com/muzifi/edit?html,css,js,output) with a different look for the visualization: please read the source code and try to understand how the drawing of the frequency is done.

[Image: Same example as before but with symmetric and colored frequency visualizations.]

[Last example at JSBin](https://jsbin.com/fekorej/edit?html,js,output), this time with the graphic equalizer, a master volume (gain) and a stereo panner node just before the visualizer node:

[Image: Previous example with a master volume (gain node) and the equalizer + a stereoPanner node.]

And here is the audio graph for this example (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension):

Source code from this example's buildAudioGraph function:
JavaScript code!
1. function buildAudioGraph() {
+2. var mediaElement = document.getElementById('player');
3. var sourceNode = audioContext.createMediaElementSource(mediaElement);
4.
5. // Create an analyser node
15. [60, 170, 350, 1000, 3500, 10000].forEach(function(freq, i) {
16. var eq = audioContext.createBiquadFilter();
17. eq.frequency.value = freq;
+18. eq.type = "peaking";
19. eq.gain.value = 0;
20. filters.push(eq);
21. });
22.
23. // Connect filters in sequence
24. sourceNode.connect(filters[0]);
+25. for(var i = 0; i < filters.length - 1; i++) {
26. filters[i].connect(filters[i+1]);
27. }
28.
41. // Connect the stereo panner to analyser and analyser to destination
42. stereoPanner.connect(analyser);
43. analyser.connect(audioContext.destination);
44. }
1.5.7 Volume Meters
Important note: the volume meter implementations below use rough approximations and cannot be taken as the most accurate way to compute an exact volume. See the end of the page for some extra explanations, as well as links to better (and more complex) implementations.

Example #1: add a single volume meter to the audio player

[Try it at JSBin](https://jsbin.com/kuciset/edit?html,css,js,output):

[Image: Single volume meter that dances with the music.]

In order to have a "volume meter" which traces upward/downward with the intensity of the music, we will compute the average intensity of our frequency ranges, and draw this value using a nice gradient-filled rectangle.
Here are the two functions we will call from the animation loop:
1. function drawVolumeMeter() {
2. canvasContext.save();
3.
4. analyser.getByteFrequencyData(dataArray);
20. var length = array.length;
21.
22. // get all the frequency amplitudes
-23. for (var i = 0; i < length; i++) {
+23. for (var i = 0; i < length; i++) {
24. values += array[i];
25. }
26.
27. average = values / length;
28. return average;
-29. }
-```
+29. }
Note that we are measuring intensity (line 4) and, once the frequency analysis data has been copied into dataArray, we call the getAverageVolume function (line 5) to compute the average value, which we will draw as the volume meter.

This is how we create the gradient:
```
// create a vertical gradient of the height of the canvas
gradient = canvasContext.createLinearGradient(0, 0, 0, height);
gradient.addColorStop(1, '#000000');
gradient.addColorStop(0.75, '#ff0000');
gradient.addColorStop(0.25, '#ffff00');
gradient.addColorStop(0, '#ffffff');
```
And here is what the new animation loop looks like (for the sake of clarity, we have moved the code that draws the signal waveform to a separate function):

```
1. function visualize() {
2.
3.   clearCanvas();
4.
   // ... (lines 5-6 not shown in this extract) ...
7.
8.   // call again the visualize function at 60 frames/s
9.   requestAnimationFrame(visualize);
10. }
```
Notice that we used the best practices seen in week 3 of the HTML5 part 1 course: we saved and restored the context in all functions that change something in the canvas context (see the drawVolumeMeter and drawWaveForm functions in the source code).
Example #2: draw two volume meters, one for each stereo channel

This time, let's split the audio signal and create a separate analyser for each output channel. We retain the analyser node that is being used to draw the waveform, as this works on the stereo signal (and is connected to the destination in order to hear the full audio).

We added a stereoPanner node right after the source and a left/right balance slider to control its pan property. Use this slider to see how the left and right volume meters react.

In order to isolate the left and the right channel (for creating individual volume meters), we used a new node called a Channel Splitter node. From this node, we created two routes, each going to a separate analyser (lines 46 and 47 of the example below). A small sketch of this wiring is shown after the list.

- See the ChannelSplitterNode's documentation. Notice that there is also a ChannelMergerNode for merging multiple routes into a single stereo signal.

- Use the connect method with extra parameters to connect the different outputs of the channel splitter node:

- connect(node, 0, 0) to connect the left output channel to another node,

- connect(node, 1, 0) to connect the right output channel to another node.

[Example at JSBin](https://jsbin.com/qezevew/edit?html,css,js,output):

*(Screenshot: example with stereo volume meters.)*
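The two splitter-to-analyser connections (lines 46 and 47) are not visible in the source extract further down, so here is a minimal sketch of what that wiring looks like with the three-argument connect call. The node names follow the example; analyserRight is assumed to be created the same way as analyserLeft.

```
// Sketch of the splitter wiring (analyserRight assumed to exist, like analyserLeft)
var splitter = audioContext.createChannelSplitter(2);

// the stereo signal (after the stereo panner) feeds the splitter...
stereoPanner.connect(splitter);

// ...and each output channel of the splitter goes to its own analyser:
splitter.connect(analyserLeft, 0, 0);   // left channel  -> analyserLeft
splitter.connect(analyserRight, 1, 0);  // right channel -> analyserRight
```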
This is the audio graph we've built (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension):

As you can see, there are two routes: the one on top sends the output signal to the speakers and uses an analyser node to animate the waveform, while the one at the bottom splits the signal and sends its left and right parts to separate analyser nodes which draw the two volume meters. Just before the split, we added a stereoPanner to enable adjustment of the left/right balance with a slider.

Source code extract:
```
1. function buildAudioGraph() {
2.   var mediaElement = document.getElementById('player');
3.   var sourceNode = audioContext.createMediaElementSource(mediaElement);
4.
5.   // connect the source node to a stereo panner
   // ... (lines 6-22 not shown in this extract) ...
23.  // stereoPanner node, with two analysers for the meters
24.
25.  // Two analysers for the stereo volume meters
26.  // Here we use a small FFT value as we're gonna work with
27.  // frequency analysis data
28.  analyserLeft = audioContext.createAnalyser();
29.  analyserLeft.fftSize = 256;
30.
   // ... (lines 31-48 not shown in this extract) ...
49.  // No need to connect these analysers to something, the sound
50.  // is already connected through the route that goes through
51.  // the analyser used for the waveform
52. }
```
And here is the new function for drawing the two volume meters:
```
1. function drawVolumeMeters() {
2.   canvasContext.save();
3.
4.   // set the fill style to a nice gradient
   // ... (lines 5-18 not shown in this extract) ...
19.  canvasContext.fillRect(26, height-averageRight, 25, height);
20.
21.  canvasContext.restore();
22. }
```
The code is very similar to the previous one. We draw two rectangles side by side, corresponding to the two analyser nodes, instead of the single display in the previous example.
Extra explanations and resources
Indeed, the proposed examples are fine for making things "dance with the music" but rather inaccurate if you are looking for a real volume meter. Results may also change if you modify the size of the FFT in the analyser node properties. There are accurate implementations of volume meters in WebAudio (see this [volume meter example](https://github.com/cwilso/volume-meter)) but they use nodes that were out of the scope of this course. Also, a student from this course named "SoundSpinning" proposed another approximation that gives more stable results. Read below:

SoundSpinning: "The only halfway-close way I found for the meter levels is to use getFloatTimeDomainData from the analyser, which seems to give a normalized array between -1 and 1. Then just plot the actual wave level values as we loop in the canvas rendering. This is still not great, since the canvas works at 60Hz while (most of the time) audio sampling is 44.1kHz, but it is closer. This also keeps the same levels no matter what FFTSize you apply."

Here is a codepen with my proposed meters.
1.5.8 Sound Samples Loaded in Memory
For some applications, it may be necessary to load sound samples into memory and uncompress them before they can be used:

- No streaming/decoding in real time means less CPU is used,

- With all samples loaded in memory, it's possible to play them in sync with great precision,

- It's possible to make loops, add effects, change the playback rate, etc.

- And of course, if they are in memory and uncompressed, there is no wait time for them to start playing: they are ready to be used immediately!

These features are useful in video games, where a library of sounds may need to be ready to be played. By changing the playback rate or the effects, many different sounds can be created, even with a limited number of samples (for instance, an explosion played at different speeds, with different effects).

Let's try some demos!
Here is a first [example at JSBin](https://jsbin.com/gojuxo/edit?html,js,console,output): click on the different buttons. Only two minimal sound samples are used in this example: [shot1.mp3](https://mainline.i3s.unice.fr/mooc/shoot1.mp3) and [shot2.mp3](https://mainline.i3s.unice.fr/mooc/shoot2.mp3). You can download many free sound samples like these from the [freesound.org](https://freesound.org/) Web site.

*(Screenshot: buttons that play sound samples many times with different pitch, volume and time intervals.)*

Here is what the WebAudio graph looks like (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension):

*(Screenshot: the WebAudio graph of this example.)*

Music applications such as Digital Audio Workstations (GarageBand-like apps) will need to play/record/loop music tracks in memory.

[Try this impressive DAW](https://remixxer.com/app/) that uses free sound samples from freesound.org! Each instrument is a small audio file that contains all the notes played on a real instrument. When you play a song (a MIDI file), the app plays along, selecting the matching musical note from the corresponding instrument audio sample. This is all done with Web Audio and samples loaded in memory:

*(Screenshot: the remixxer DAW, a typical DAW with tracks, a mixing table, etc.)*
The author of this course wrote a multitrack audio player: it loads different mp3 files corresponding to different instruments and plays/loops them in sync.

[You can try](https://mainline.i3s.unice.fr/) it or [get the sources on GitHub](https://github.com/squallooo/MT5). The documentation is in the help menu.

*(Screenshot: MT5, a multitrack player.)*

Try also this small demonstration that uses the [Howler.js library](https://goldfirestudios.com/blog/104/howler.js-Modern-Web-Audio-Javascript-Library) for loading sound samples in memory and playing them using WebAudio (we'll discuss this library later). Click on the main window and notice how fast the sound effects are played. Click as fast as you can!

[Try the explosion demo at JSBin](https://jsbin.com/gefezu/edit):
Use an AudioBufferSourceNode as the source of the sound sample in the Web Audio graph

There is a special node in Web Audio for handling sound samples, called an AudioBufferSourceNode.

This node has different properties:

- buffer: the decoded sound sample.

- loop: should the sample be played as an infinite loop? When the sample has played to its end, it is restarted from the beginning (the default value is false). Looping also depends on the next two properties.

- loopStart: a double value indicating, in seconds, at what point in the buffer playback must restart when looping. Its default value is 0.

- loopEnd: a double value indicating, in seconds, at what point in the buffer playback must stop (and eventually loop again). Its default value is 0.

- playbackRate: the speed factor at which the audio asset will be played. Since no pitch correction is applied on the output, this can be used to change the pitch of the sample.

- detune: not relevant for this course.
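As a minimal sketch of how these properties are used (assuming an AudioContext named ctx and a decoded AudioBuffer named decodedSound, as in the examples that follow; the numeric values are arbitrary):

```
// Sketch: playing a decoded buffer as a loop, one octave higher
// (ctx and decodedSound are assumed to exist, as in the examples below).
var bufferSource = ctx.createBufferSource();
bufferSource.buffer = decodedSound;      // the decoded sound sample
bufferSource.loop = true;                // loop between loopStart and loopEnd
bufferSource.loopStart = 0.5;            // in seconds (arbitrary value)
bufferSource.loopEnd = 1.5;              // in seconds (arbitrary value)
bufferSource.playbackRate.value = 2;     // double speed = higher pitch
bufferSource.connect(ctx.destination);
bufferSource.start();                    // a source node can be started only once
```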
Loading and decoding a sound sample
Before use, a sound sample must be loaded using Ajax, decoded, and set to the buffer property of an AudioBufferSourceNode.

[Try the example at JSBin](https://jsbin.com/botagas/edit?html,js,console,output):

*(Screenshot: example that loads and plays a single sound.)*

In this example, as soon as the page is loaded, we send an Ajax request to a remote server in order to get the file shoot2.mp3. When the file is loaded, we decode it. Then we enable the button (before that, the sample was not available and thus could not be played). Now you can click on the button to make the noise.

Notice in the code that each time we click on the button, we rebuild the audio graph.

> This is because AudioBufferSourceNodes can be used only once!
>
> But don't worry, Web Audio is optimized for handling thousands of nodes...
```
1. var ctx;
2.
3. var soundURL =
4.   'https://mainline.i3s.unice.fr/mooc/shoot2.mp3';
5. var decodedSound;
6.
7. window.onload = function init() {
   // ... (lines 8-24 not shown in this extract) ...
25. function loadSoundUsingAjax(url) {
26.   var request = new XMLHttpRequest();
27.
28.   request.open('GET', url, true);
29.   // Important: we're loading binary data
30.   request.responseType = 'arraybuffer';
31.
32.   // Decode asynchronously
33.   request.onload = function() {
34.     console.log("Sound loaded");
35.
36.     // Let's decode it. This is also asynchronous
37.     ctx.decodeAudioData(request.response,
38.       function(buffer) { // success
39.         console.log("Sound decoded");
40.         decodedSound = buffer;
41.         // we enable the button
42.         playButton.disabled = false;
43.       },
44.       function(e) { // error
45.         console.log("error");
46.       }
47.     ); // end of decodeAudioData callback
48.   }; // end of the onload callback
   // ... (lines 49-57 not shown in this extract) ...
58.   bufferSource.buffer = buffer;
59.   bufferSource.connect(ctx.destination);
60.   bufferSource.start(); // remember, you can start() a source only once!
61. }
```
Explanations:
- When the page is loaded, we first call the loadSoundUsingAjax function for loading and decoding the sound sample (line 16), then we define a click listener for the play button. Loading and decoding the sound can take some time, so it's an asynchronous process. This means that the call to loadSoundUsingAjax will return while the downloading and decoding is still in progress. We can define a click listener on the button anyway, as it is disabled by default (see the HTML code). Only once the sample has been loaded and decoded will the button be enabled (line 42).

- The loadSoundUsingAjax function will first create an XMLHttpRequest using the "new version of Ajax called XHR2" (described in detail during week 3). First we create the request (lines 26-30): notice the use of 'arraybuffer' as the responseType of the request. This was introduced by XHR2 and is necessary for binary file transfers. Then the request is sent (line 52).

- Ajax is an asynchronous process: once the browser receives the requested file, the request.onload callback will be called (it is defined at line 33), and we can decode the file (an mp3, the content of which must be uncompressed in memory). This is done by calling ctx.decodeAudioData(file, successCallback, errorCallback). When the file is decoded, the success callback is called (lines 38-43). We store the decoded buffer in the variable decodedSound, and we enable the button.

- Now, when someone clicks on the button, the playSound function will be called (lines 55-61). This function builds a simple audio graph: it creates an AudioBufferSourceNode (line 57), sets its buffer property with the decoded sample, connects this source to the speakers (line 59) and plays the sound. Source nodes can only be used once (a "fire and forget" philosophy), so to play the sound again, we have to rebuild a source node and connect that to the destination. This seems strange when you learn Web Audio, but don't worry - it's a very fast operation, even with hundreds of nodes.
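For reference, here is a sketch of what the playSound function described in the last bullet looks like. Lines 55-57 are not visible in the extract above, so this is a reconstruction based on the explanation and on the visible lines 58-61:

```
// Sketch of the playSound function (reconstructed from the explanation above):
function playSound(buffer) {
  // a new source node is built on every click: source nodes are "fire and forget"
  var bufferSource = ctx.createBufferSource();
  bufferSource.buffer = buffer;
  bufferSource.connect(ctx.destination);
  bufferSource.start(); // a source node can be start()ed only once
}
```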
Loading and decoding multiple sounds: the BufferLoader utility
The problem: Ajax requests are asynchronous
The asynchronous aspect of Ajax has always been problematic for beginners. For example, if our application uses multiple sound samples and we need to be sure that all of them are loaded and decoded, the code we presented in the earlier example will not work as is. We cannot call:
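Something along these lines (a sketch only; the exact snippet is not reproduced in this extract - loadSoundUsingAjax is the function from the previous example):

```
// Naive (and insufficient) approach: each call returns immediately,
// before the sample is actually loaded and decoded.
loadSoundUsingAjax('https://mainline.i3s.unice.fr/mooc/shoot1.mp3');
loadSoundUsingAjax('https://mainline.i3s.unice.fr/mooc/shoot2.mp3');
// ...no way to know here when ALL samples are ready!
```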
... because we will never know exactly when all the sounds have finished being loaded and decoded. All these calls run their operations in the background yet return instantly.
The BufferLoader utility object: useful for preloading sound and image assets
There are different approaches for dealing with this problem. During the HTML5 Coding Essentials and Best Practices course, we presented utility functions for loading multiple images. Here we use the same approach and have packaged the code into an object called the BufferLoader.

[Example at JSBin that uses the BufferLoader utility](https://jsbin.com/javoger/edit?html,js,console,output):
```
1. var listOfSoundSamplesURLs = [
2.   'https://mainline.i3s.unice.fr/mooc/shoot1.mp3',
3.   'https://mainline.i3s.unice.fr/mooc/shoot2.mp3'
4. ];
5.
6. window.onload = function init() {
   // ... (lines 7-22 not shown in this extract) ...
23.
24.
25. function onSamplesDecoded(buffers){
26.   console.log("all samples loaded and decoded");
27.   // enables the buttons
28.   shot1Normal.disabled=false;
29.   shot2Normal.disabled=false;
   // ... (lines 30-45 not shown in this extract) ...
46.
47.   // starts loading and decoding the files
48.   bufferLoader.load();
49. }
```
After the call to loadAllSoundSamples() (line 13), when all the sound sample files have been loaded and decoded, a callback will be initiated to onSamplesDecoded(decodedSamples), located at line 25. The array of decoded samples is the parameter of the onSamplesDecoded function.

The BufferLoader utility object is created at line 45 and takes as parameters: 1) the audio context, 2) an array listing the URLs of the different audio files to be loaded and decoded, and 3) the callback function to be called once all the files have been loaded and decoded. This callback function should accept an array as its parameter: the array of decoded sound files.

To study the source of the BufferLoader object, look at the JavaScript tab in [the example at JSBin](https://jsbin.com/gegita/edit?html,js,console,output).
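Given that description, the creation of the BufferLoader (around line 45, not visible in the extract above) looks roughly like this (a sketch reusing the names from the extract):

```
// Sketch of the BufferLoader creation described above:
// 1) the audio context, 2) the list of URLs, 3) the callback that receives
// the array of decoded buffers once everything is loaded and decoded.
bufferLoader = new BufferLoader(ctx, listOfSoundSamplesURLs, onSamplesDecoded);
```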
Playing the two sound samples at various playback rates, repeatedly

This is a variant of the previous example (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension).

[Example at JSBin](https://jsbin.com/zebokeg/edit?html,js,console,output):

*(Screenshot: audio graph used in the previous example: source node -> gain -> compressor -> destination.)*

In this example, we added a function (borrowed and adapted from [this article on HTML5Rocks](https://www.html5rocks.com/en/tutorials/webaudio/games/)):
```
1. function makeSource(buffer) {
2.   // build graph source -> gain -> compressor -> speakers
3.   // We use a compressor at the end to cut the part of the signal
4.   // that would make peaks
5.   // create the nodes
   // ... (lines 6-17 not shown in this extract) ...
18.  gain.connect(compressor);
19.  compressor.connect(ctx.destination);
20.  return source;
21. }
```
And this is the function that plays different sounds in a row, optionally adding random time intervals between them and random pitch variations:

```
1. function playSampleRepeated(buffer, rounds, interval, random, random2) {
2.   if (typeof random == 'undefined') {
3.     random = 0;
4.   }
5.   if (typeof random2 == 'undefined') {
6.     random2 = 0;
7.   }
8.
9.   var time = ctx.currentTime;
10.  // Make multiple sources using the same buffer and play in quick succession.
11.  for (var i = 0; i < rounds; i++) {
12.    var source = makeSource(buffer);
13.    source.playbackRate.value = 1 + Math.random() * random2;
14.    source.start(time + i * interval + Math.random() * random);
15.  }
16. }
```
Explanations:
- Lines 11-15: we make a loop for building multiple routes in the graph. The number of routes corresponds to the number of times we want the same buffer to be played. Note that the random2 parameter enables us to randomize the playback rate of the source node, which corresponds to the pitch of the sound.

- Line 14: this is where the sound is played. Instead of calling source.start(), we call source.start(delay); this tells the Web Audio scheduler to play the sound after a certain time.

- The makeSource function builds a graph from one decoded sample to the speakers. A gain is added that is also randomized in order to generate shot sounds with different volumes (between 0.2 and 1.2 in the example). A compressor node is added in order to limit the max intensity of the signal in case the gain makes it peak.
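For instance, a hypothetical call (the parameter values are chosen only for illustration) that plays the first decoded sample several times in a row:

```
// Hypothetical usage: play buffers[0] 5 times, roughly every 100 ms,
// with up to 50 ms of random jitter and up to +50% random pitch variation.
playSampleRepeated(buffers[0], 5, 0.1, 0.05, 0.5);
```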
1.5.10 Sound samples and effects

Any of the effects discussed during these lectures (gain, stereo panner, reverb, compressor, equalizer, analyser node for visualization, etc.) may be added to the audio graphs that we have built in our sound sample examples.

Below, we have mixed the code from two previous examples:

[This one at JSBin](https://jsbin.com/vejocav/edit?html,css,js,output):

*(Screenshot: audio player with volume meters and waveform.)*

[And this one at JSBin](https://jsbin.com/nazega/edit?html,js,console,output) (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension):

*(Screenshot: multiple sound samples played at different intervals and rates.)*

And here is the result ([try it at JSBin](https://jsbin.com/coraso/edit?html,js,console,output)):

*(Screenshot: sound samples and 2D visualization.)*

Here is the audio graph of this example (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension):

*(Screenshot: the audio graph of this example.)*

Look at the source code on JSBin; it's a quick merge of the two previous examples.
1.5.11 Useful third party libraries

It's best practice to know the Web Audio API itself. Many of the examples demonstrated during this course may be hard to write using high-level libraries. However, if you don't have too many custom needs, such libraries can make your life simpler! Also, some libraries use sound synthesis that we did not cover in the course and are fun to use; for example, adding 8-bit sounds to your HTML5 game!

Many JavaScript libraries have been built on top of WebAudio. We recommend the following:

- [HowlerJS](https://goldfirestudios.com/howler-js-modern-web-audio-javascript-library): useful for loading and playing sound samples in video games. It can handle audio sprites (multiple sounds in a single audio file), loops and spatialization, and is very simple to use. Try [this very simple example we prepared for you at JSBin](https://jsbin.com/wuteqo/edit?html,js,output) that uses HowlerJS!

- [Webaudiox, and in particular a helper built with this library, jsfx](https://blog.jetienne.com/blog/2014/02/27/webaudiox-jsfx/), for adding 8-bit procedural sounds to video games without the need to load audio files. [Try the demo](https://jeromeetienne.github.io/webaudiox/examples/jsfx.html)! There is also [a sound generator](https://egonelbre.com/project/jsfx/) you can try. When you find a sound you like, just copy and paste the parameter values into your code.

- For writing musical applications, take a look at ToneJS!
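As a minimal sketch of the kind of code Howler.js lets you write (Howler.js 2.x style API; the sound URL reuses the sample from the earlier examples):

```
// Minimal Howler.js sketch: load a sample and play it on demand.
var shotSound = new Howl({
  src: ['https://mainline.i3s.unice.fr/mooc/shoot1.mp3']
});

// play it whenever needed, e.g. on each click
shotSound.play();
```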
Module 2 - Game Programming with HTML5

Hi! When I was 14, I was a fan of the Ramones. I bought an electric guitar and played in a rock band at my school. A few years later, I wanted to become a game creator. This is how I became a scientist in computer engineering. During this week, I went back in time, because I'm going to teach you how to write HTML5 games.

During this week, you will greatly improve your knowledge of the HTML5 canvas API by learning the core techniques for writing 2D games that run at 60 frames/s.

We'll teach you how to create a low-level game framework that provides all the basic blocks you need for writing a video game: a game loop with time-based animation - this is an important technique that will enable your game to run at the same speed on different devices.

We will look at richer interactions with keyboard, mouse and gamepad, and we will also learn how to animate multiple objects on screen.

Not to forget learning about efficient collision detection and managing game states (start menu, the game itself with different levels, game over at the end).

We will also look at the sprite-based animation technique, which consists of extracting small images of a character's postures and drawing the different sub-images in sequence. It gives the impression of the character moving. We will provide a sprite engine that will help you use this technique. And finally we will look at how we can add music and sound effects with the Web Audio API seen during Week 1.

This is one of the most fun weeks of this course, so we hope you will share your own creations and enjoy this part of the course.
2.2.1 History of JavaScript games

There is a widely-held belief that games running in Web browsers and without the help of plugins are a relatively new phenomenon. This is not true: back in 1998, Donkey Kong ran in a browser (screenshot below). It was a game by Scott Porter, written using only standard Web technologies (HTML, JavaScript, and CSS).

*(Screenshot: Donkey Kong running in a 1998 browser.)*

Just a few years after the Web was born, JavaScript appeared - a simple script language with C-like syntax for interacting with and changing the structure of documents - together with HTML, the [HyperText Markup Language](https://en.wikipedia.org/wiki/Html) used for describing text documents. For the first time, particular elements could be moved across a browser's screen. This was noticed by Scott Porter who, in 1998, created the first JavaScript game library with the very original name, 'Game Lib'. At this time, Porter focused largely on creating ports of old NES and Atari games using animated gifs, but he also developed a Video Pool game in which he emulated the angle of a cue with [a sprite of 150 different positions](https://web.archive.org/web/20070104175757/http:/www.smashcat.org/arcade/pool/cue.gif)!

During the late 1990s and early 2000s, JavaScript increased in popularity, and the community coined the term 'DHTML' ([Dynamic HTML](https://en.wikipedia.org/wiki/Dhtml)), which was to be the first umbrella term describing a collection of technologies used together to create interactive and animated Web sites. Developers of the DHTML era hadn't forgotten about Porter's 'Game Lib', and within a couple of years, [Brent Silby](https://def-logic.com/) presented 'Game Lib 2'. It is still possible to play many games created with that library on his Web site.
The DHTML era was a time when JavaScript games were as good as those made in Flash. Developers made many DOM libraries that were useful for game development, such as Peter Nederlof's Beehive with its outstanding [Rotatrix](https://peterned.home.xs4all.nl/games.html#rotatrix) (which, personally, I think is one of the best HTML games EVER). The first very polished browser games were also developed; Jacob Sidelin, creator of 14KB Mario (screenshot on the right), created [the very first page dedicated to JavaScript games](https://web.archive.org/web/20090519005306/http:/www.javascriptgaming.com/).

*(Screenshot: 14KB Mario.)*

And then came 2005: 'the year of [AJAX](https://en.wikipedia.org/wiki/Ajax_%28programming%29)'. Even though 'AJAX' just stands for 'Asynchronous JavaScript and XML', in practice it was another umbrella term describing methods, trends and technologies used to create a new kind of Web site - [Web 2.0](https://en.wikipedia.org/wiki/Web_2.0).

Popularization of new JavaScript patterns introduced the ability to create multiplayer connections or even true emulators of old computers. The best examples of this time were 'Freeciv' (screenshot on the left) by Andreas Rosdal, a port of Sid Meier's Civilization, and [Sarien.net](http://sarien.net/) by Martin Kool, an emulator of old Sierra games.

*(Screenshot: Freeciv running in the browser.)*

And now we are entering a new era in the history of the Web: "HTML5"!
2.2.2 Elements and APIs useful for writing games

In the W3Cx [HTML5 Coding Essentials and Best Practices](https://www.edx.org/course/html5-coding-essentials-and-best-practices) course, we studied the canvas, drawing, and animation elements. These are going to be revisited in more detail in this section.

Here, we present some elements that are useful in writing games.
Drawing: the <canvas> element
The <canvas> is an HTML element described as "a resolution-dependent bitmap canvas which can be used for rendering graphs, game graphics, or other visual images on the fly." It's a rectangle included in your page where you can draw using scripting with JavaScript. It can, for instance, be used to draw graphs, make photo compositions or do animations. This element comprises a drawable region defined in HTML code with height and width attributes.

You can have multiple canvas elements on one page, even stacked one on top of another, like transparent layers. Each will be visible in the DOM tree and has its own state, independent of the others. It behaves like a regular DOM element.

The canvas has [a rich JavaScript API](https://www.w3.org/TR/2dcontext/) for drawing all kinds of shapes; we can draw wireframe or filled shapes and set several properties such as color, line width, patterns, gradients, etc. It also supports transparency and pixel-level manipulations. It is supported by all browsers, on desktop or mobile phones, and on most devices it will take advantage of hardware acceleration.

It is undoubtedly the most important element in the HTML5 specification from a game developer's point of view, so we will discuss it in greater detail later in the course.

The W3C [HTML Working Group](https://www.w3.org/html/wg/) published [HTML Canvas 2D Context](https://www.w3.org/TR/2dcontext/) as a W3C Recommendation (i.e., Web standard status).
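As a minimal reminder of how drawing on a canvas works (the element id and the sizes below are arbitrary, and a matching canvas element is assumed to exist in the page):

```
// Sketch: drawing on a canvas (assumes <canvas id="myCanvas" width="300" height="150">)
var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');  // the 2D drawing context
ctx.fillStyle = 'red';
ctx.fillRect(10, 10, 100, 50);      // draw a filled 100x50 rectangle
```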
Animating at 60 fps: the requestAnimationFrame API
The requestAnimationFrame API targets 60 frames per second animation in canvases. This API is quite simple and also comes with a high-resolution timer. Animation at 60 fps is often easy to obtain with simple 2D games on most desktop computers. This is the preferred way to perform animation, as the browser will ensure that the animation is not performed when the canvas is not visible, thus saving CPU resources.
Videos and animated textures: the <video> element
The HTML5 <video> element was introduced in the HTML5 specification for the purpose of playing streamed videos or movies, partially replacing the object element. The JavaScript API is nearly the same as that of the <audio> element and enables full control from JavaScript.

By combining the capabilities of the <video> and <canvas> elements, it is possible to manipulate video data to incorporate a variety of visual effects in real time, and conversely, to use images from videos as "animated textures" over graphic objects.
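A minimal sketch of the classic "animated texture" pattern (element ids are assumptions; drawImage accepts a <video> element as its image source):

```
// Sketch: copying the current video frame onto a canvas on each animation frame
// (assumes <video id="myVideo"> and <canvas id="myCanvas"> in the page).
var video = document.getElementById('myVideo');
var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');

function drawFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // video frame as a texture
  requestAnimationFrame(drawFrame);
}

video.addEventListener('play', function() {
  requestAnimationFrame(drawFrame);
});
```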
Audio (streamed audio and real time sound effects): the <audio> element and the Web Audio API
The <audio> element

<audio> is an HTML element that was introduced to give a consistent API for playing streamed sounds in browsers. File format support varies between browsers, but MP3 works in nearly all browsers today. Unfortunately, the <audio> element is only for streaming compressed audio, so it consumes CPU resources, and it is not suited to sound effects where you would like to change the playing speed or add real-time effects such as reverberation or doppler. For this, [the Web Audio API](https://www.w3.org/TR/webaudio/) is preferable.

The Web Audio API

This is a 100% JavaScript API designed for working in real time with uncompressed sound samples or for generating procedural music. Sound samples need to be loaded into memory and decompressed prior to being used. Up to 12 sound effects are provided natively by browsers that support the API (all major browsers except IE, although Microsoft Edge supports it).
Interacting: dealing with keyboard and mouse events, the GamePad API
User inputs will rely on several APIs, some of which are well established, such as the DOM API that is used for keyboard, touch or mouse inputs. There is also a [Gamepad API](https://www.w3.org/TR/gamepad/) (in W3C Working Draft status) that is already implemented by some browsers, which we will also cover in this course. The Gamepad specification defines a low-level interface that represents gamepad devices.
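A minimal sketch of how a game loop can poll the Gamepad API (the button and axis indices used here are only illustrative):

```
// Sketch: polling connected gamepads inside the game loop.
function pollGamepads() {
  var gamepads = navigator.getGamepads();   // array of connected gamepads (or nulls)
  var gamepad = gamepads[0];
  if (gamepad) {
    var leftRight = gamepad.axes[0];        // -1 (left) .. +1 (right)
    var firePressed = gamepad.buttons[0].pressed;
    // use these values to move the player, fire, etc.
  }
}
```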
Multi-participant features: WebSockets

IMPORTANT INFORMATION: NOT COVERED IN THIS COURSE

Using [the WebSockets technology](https://fr.wikipedia.org/wiki/WebSocket) (which is not part of HTML5, but is specified separately by the W3C and the IETF), you can create two-way communication sessions between multiple browsers and a server. The WebSocket API, and useful libraries built on top of it such as [socket.io](https://socket.io/), provide the means for sending messages to a server and receiving event-driven responses without having to poll the server for a reply.
2.2.3 The "game loop"

The "game loop" is the main component of any game. It separates the game logic and the visual layer from a user's input and actions.

Traditional applications respond to user input and do nothing without it - word processors format text as a user types. If the user doesn't type anything, the word processor is waiting for an action.

Games operate differently: a game must continue to operate regardless of a user's input!

The game loop allows this. The game loop computes events in our game all the time. Even if the user doesn't make any action, the game will move the enemies, resolve collisions, play sounds and draw graphics as fast as possible.

Different implementations of the 'Main Game Loop'

There are different ways to perform animation with JavaScript. A very detailed comparison of three different methods has already been presented in the W3Cx HTML5 Coding Essentials course. Below is a quick reminder of the methods, illustrated with new, short, online examples.
Performing animation using the JavaScript setInterval(...) function

Syntax: setInterval(function, ms);

The setInterval function calls a function or evaluates an expression at specified intervals of time (in milliseconds), and returns a unique id for the action. You can always stop it by calling the clearInterval(id) function with the interval identifier as an argument.

[Try an example at JSBin](https://jsbin.com/qopefu/edit): open the HTML, JavaScript and output tabs to see the code.

```
var addStarToTheBody = function(){
  document.body.innerHTML += "*";
};

// this will add one star to the document every 200ms (1/5 s)
setInterval(addStarToTheBody, 200);
```
...or, like we did in the example, with an external function.

Reminder from the HTML5 Coding Essentials course: with setInterval, if we set the number of milliseconds to, say, 200, it will call our game loop function EVERY 200ms, even if the previous call has not yet finished. Because of this disadvantage, we might prefer to use another function, better suited to our goals.
Using setTimeout() instead of setInterval()
Syntax: setTimeout(function, ms);

The setTimeout function works like setInterval, with one little difference: it calls your function AFTER a given amount of time.

[Try an example at JSBin](https://jsbin.com/vuvitu/edit): open the HTML, JavaScript and output tabs to see the code. This example does the same thing as the previous one by adding a "*" to the document every 200ms.

```
var addStarToTheBody = function(){
  document.body.innerHTML += "*";
  // calls itself again AFTER 200ms
  setTimeout(addStarToTheBody, 200);
};

// calls the function AFTER 200ms
setTimeout(addStarToTheBody, 200);
```
This example will work like the previous example. However, it is a
definite improvement, because the timer waits for the function to finish
everything inside before calling it again.

For several years, setTimeout was the best and most popular JavaScript
implementation of game loops. This changed when Mozilla presented the
requestAnimationFrame API, which became the reference W3C standard API
for game animation.

Using the requestAnimationFrame API

Note: using requestAnimationFrame was covered in detail in the W3Cx
HTML5 Coding Essentials course.

When we use timeouts or intervals in our animations, the browser doesn't
have any information about our intentions -- do we want to repaint the
DOM structure or a canvas during every loop? Or maybe we just want to
make some calculations or send requests a couple of times per second?
For this reason, it is really hard for the browser's engine to optimize
the loop.

And since we want to repaint the game (move the characters, animate
sprites, etc.) every frame, Mozilla and other contributors/developers
introduced a new approach which they called requestAnimationFrame.

This approach helps the browser to optimize all the animations on the
screen, no matter whether Canvas, DOM or WebGL. Also, if the animation
loop is running in a browser tab that is not currently visible, the
browser won't keep it running.

Basic usage, [online example at
JSBin](https://jsbin.com/geqija/1/edit?html,js,output):
```
1. window.onload = function init() {
2.     // called after the page is entirely loaded
3.     requestAnimationFrame(mainloop);
4. };
5.
6. function mainloop(timestamp) {
7.     document.body.innerHTML += "*";
8.
9.     // call back itself every 60th of second
10.    requestAnimationFrame(mainloop);
11. }
```
Notice that calling requestAnimationFrame(mainloop) at line 10 asks
the browser to call the mainloop function every 16.6 ms: this
corresponds to 60 times per second (16.6 ms = 1/60 s).

This target may be hard to reach; the animation loop content may take
longer than this, or the scheduler may be a bit early or late.

Many "real action games" perform what we call time-based
animation. For this, we need an accurate timer that will tell us the
elapsed time between each animation frame. Depending on this time, we
can compute the distances each object on the screen must cover in
order to move at a given speed, independently of the CPU or GPU of the
computer or mobile device that is running the game.

The timestamp parameter of the mainloop function is useful for
exactly that: it gives a high resolution time.

We will cover this in more detail, later in the course.
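As a first taste of what is coming, here is a minimal sketch (not the course's final implementation) that uses the timestamp parameter to compute the delta between two frames, and moves an object at a constant speed whatever the real frame rate is:

```
var previousTime;
var x = 0;          // horizontal position of some object
var speed = 100;    // speed we want, in pixels per second

function mainloop(timestamp) {
    if (previousTime === undefined) previousTime = timestamp;

    // elapsed time since the last frame, in milliseconds
    var delta = timestamp - previousTime;
    previousTime = timestamp;

    // move at "speed" pixels/s regardless of the actual frame rate
    x += speed * delta / 1000;

    requestAnimationFrame(mainloop);
}

requestAnimationFrame(mainloop);
```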
2.3.1 A game framework skeleton

We are going to develop a game - not all at once, let's divide the whole
job into a series of smaller tasks. The first step is to create
a foundation or basic structure.

Let's start by building the skeleton of a small game framework, based
on [the Black Box Driven Development in
JavaScript](https://hacks.mozilla.org/2014/08/black-box-driven-development-in-javascript/) methodology.
In other words: a game framework skeleton is a simple object-based model
that uses encapsulation to expose only useful methods and properties.

We will evolve this framework throughout the lessons in this course, and
cut it into different files once it becomes too large to fit within one
single file.
```
var GF = function(){

    var mainLoop = function(time){
        // Main function, called each frame
        requestAnimationFrame(mainLoop);
    };

    var start = function(){
        requestAnimationFrame(mainLoop);
    };

    // Our GameFramework returns a public API visible from outside its scope
    // Here we only expose the start method, under the "start" property name.
    return {
        start: start
    };
};
```
With this skeleton, it's very easy to create a new game instance:

```
var game = new GF();

// Launch the game, start the animation loop, etc.
game.start();
```
Examples
Let's put something into the mainLoop function, and check if it works.

[Try this online example at JSBin](https://jsbin.com/tatowa/edit), with
a new mainloop (check the JavaScript and output tabs). This page
should display a different random number every 1/60 second. We don't
have a real game yet, but we're improving our game engine :-)
Source code extract:

```
var mainLoop = function(time){
    // main function, called each frame
    document.body.innerHTML = Math.random();

    // call the animation loop every 1/60th of second
    requestAnimationFrame(mainLoop);
};
```
Let's measure that animation's frame rate

Every game needs to have a function which measures the actual frame rate
achieved by the code.

The principle is simple:

1. Count the time elapsed by adding deltas in the mainloop.

2. If the sum of the deltas is greater than or equal to 1000, then 1s has
   elapsed since we started counting.

3. If at the same time, we count the number of frames that have been
   drawn, then we have the frame rate - measured in number of frames
   per second. Remember, it should be around 60 fps!

Quick glossary: the word delta is the name of a Greek
letter (uppercase Δ, lowercase δ or 𝛿). The upper-case
version is used in mathematics as an abbreviation for measuring
the change in some object over time - in our case, how quickly the
mainloop is running. This dictates the maximum speed at which the game
display will be updated. This maximum speed could be referred to as
the rate of change. We call what is displayed at a single
point in time a frame. Thus the rate of change can be measured in
frames per second (fps). Accordingly, our game's delta determines
the achievable frame rate - the shorter the delta (measured in
ms), the faster the possible rate of change (in fps).

Here is a screenshot of an example and the code we added to our game
engine, for measuring FPS ([try it online at
JSBin](https://jsbin.com/noqibu/edit)):
Source code extract:

```
// vars for counting frames/s, used by the measureFPS function
var frameCount = 0;
var lastTime;
var fpsContainer;
var fps;

var measureFPS = function(newTime){
    // test for the very first invocation
    if (lastTime === undefined) {
        lastTime = newTime;
        return;
    }

    // calculate the delta between last & current frame
    var diffTime = newTime - lastTime;

    if (diffTime >= 1000) {
        fps = frameCount;
        frameCount = 0;
        lastTime = newTime;
    }

    // and display it in an element we appended to the
    // document in the start() function
    fpsContainer.innerHTML = 'FPS: ' + fps;
    frameCount++;
};
```
Now we can call the measureFPS function from inside the animation loop,
passing it the current time, given by the high resolution timer that
comes with the requestAnimationFrame API:

```
var mainLoop = function(time){
    // compute FPS, called each frame, uses the high resolution time parameter
    // given by the browser that implements the requestAnimationFrame API
    measureFPS(time);

    // call the animation loop every 1/60th of second
    requestAnimationFrame(mainLoop);
};
```
And the <div> element used to display FPS on the screen is created in
this example by the start() function:

```
var start = function(){
    // adds a div for displaying the fps value
    fpsContainer = document.createElement("div");
    document.body.appendChild(fpsContainer);

    requestAnimationFrame(mainLoop);
};
```
Hack: achieving more than 60 FPS? It's possible, but to be avoided
except in hackers' circles!

We also know methods of implementing loops in JavaScript which achieve
even more than 60 fps (this is the limit using requestAnimationFrame).

My favorite hack uses the onerror callback on an <img> element, as shown
in the code below.

What we are doing here is creating a new image on each frame and
providing invalid data as a source of the image. The image cannot be
displayed properly, so the browser calls the onerror event handler that
is the mainloop function itself, and so on.

Funny, right? Please [try this and check the number of FPS displayed with
this JSBin example](https://jsbin.com/notupe/edit).
```
var mainLoop = function(){
    // main function, called each frame

    measureFPS(+(new Date()));

    // call the animation loop LOTS of times per second using the previous hack method
    var img = new Image();
    img.onerror = mainLoop;
    img.src = 'data:image/png,' + Math.random();
};
```
2.3.2 Introducing graphics

(Note: drawing within a canvas is studied in detail during the [W3C
HTML5 Coding Essentials and Best Practices
course](https://www.edx.org/course/html5-coding-essentials-and-best-practices),
in module 3.)

Is this really a course about games? Where are the graphics?
Good news! We will add graphics to our game engine in this lesson!
To date we have talked of "basic concepts"; so without further ado,
let's draw something, animate it, and move shapes around the screen :-)

Let's do this by including into our framework the same "monster" we used
during the [W3C HTML5 Coding Essentials and Best Practices
course](https://www.edx.org/course/html5-coding-essentials-and-best-practices).
-The canvas declaration is at line 8. Use attributes to give
+
The canvas declaration is at line 8. Use attributes to give
it a width and a height, but unless you add some CSS properties, you
-will not see it on the screen because it's transparent!
-
-Let's use CSS to reveal the canvas, for example, add a 1px black border
-around it:
-
-```
-1. canvas {
-2. border: 1px solid black;
-3. }
-```
-
-And here is a reminder of best practices when using the canvas, as
-described in the HTML5 Part 1 course:
+will not see it on the screen because it’s transparent!
-1. Use a function that is called AFTER the page is fully loaded (and
- the DOM is ready), set a pointer to the canvas node in the DOM.
+
Let’s use CSS to reveal the canvas, for example, add a 1px black
+border around it:
-2. Then, get a 2D graphic context for this canvas (the context is an
- object we will use to draw on the canvas, to set global properties
- such as color, gradients, patterns and line width).
+
1. canvas {
+2. border: 1px solid black;
+3. }
-3. Only then can you can draw something,
+
And here is a reminder of best practices when using the canvas, as
+described in the HTML5 Part 1 course:
-4. Do not forget to use global variables for the canvas and context
- objects. I also recommend keeping the width and height of the canvas
- somewhere. These might be useful later.
-
-5. For each function that will change the context (color, line width,
- coordinate system, etc.), start by saving the context, and end by
- restoring it.
+
+
Use a function that is called AFTER the page is fully loaded (and the DOM is ready), set a pointer to the canvas node in the DOM.
+
Then, get a 2D graphic context for this canvas (the context is an object we will use to draw on the canvas, to set global properties such as color, gradients, patterns and line width).
+
Only then can you can draw something,
+
Do not forget to use global variables for the canvas and context objects. I also recommend keeping the width and height of the canvas somewhere. These might be useful later.
+
For each function that will change the context (color, line width, coordinate system, etc.), start by saving the context, and end by restoring it.
+
Here is JavaScript code which implements those best practices:

```
1. // useful to have them as global variables
2. var canvas, ctx, w, h;
3.
4.
5. window.onload = function init() {
6.     // Called AFTER the page has been loaded
7.     canvas = document.querySelector("#myCanvas");
8.
9.     // Often useful
10.    w = canvas.width;
11.    h = canvas.height;
12.
13.    // Important, we will draw with this object
14.    ctx = canvas.getContext('2d');
15.
16.    // Ready to go!
17.    // Try to change the parameter values to move
...
48.
49.    // BEST practice: restore the context
50.    ctx.restore();
51. }
```
In this small example, we used the context object to draw a monster
using the default color (black), in wireframe and filled modes:

- ctx.fillRect(x, y, width, height): draws a rectangle whose top left
  corner is at (x, y) and whose size is specified by
  the width and height parameters; and both outlined by, and filled
  with, the default color.

- ctx.strokeRect(x, y, width, height): same but in wireframe mode.

- Note that we use (line 30) ctx.translate(x, y) to make it easier
  to move the monster around. So, all the drawing instructions are
  coded as if the monster was in (0, 0), at the top left corner of the
  canvas (look at line 33). We draw the body outline with a
  rectangle starting from (0, 0). Calling context.translate "changes
  the coordinate system" by moving the "old (0, 0)" to (x, y) and
  keeping other coordinates in the same position relative to the
  origin.

- Line 19: we call the drawMonster function with (10, 10) as
  parameters, which will cause the original coordinate system to be
  translated by (10, 10).

- And if we change the coordinate system (this is what the call
  to ctx.translate(...) does) in a function, it is a best practice to
  always save the previous context at the beginning of the function
  and restore it at the end of the function (lines 27 and 50).
Animating the monster and including it in our game engine
Ok, now that we know how to move the monster, let's integrate it into
our game engine:

1. add the canvas to the HTML page,

2. add the content of the init() function to the start() function of
   the game engine,

3. add a few global variables (canvas, ctx, etc.),

4. call the drawMonster(...) function from the mainLoop,

5. add a random displacement to the x, y position of the monster to see
   it moving,

6. in the main loop, do not forget to clear the canvas before drawing
   again; this is done using the ctx.clearRect(x, y, width,
   height) function.

[You can try this version online at
JSBin](https://jsbin.com/xuruja/edit).
```
1. // Inits
2. window.onload = function init() {
3.     var game = new GF();
4.     game.start();
...
45.    ...
46.
47.    // Canvas, context etc.
48.    canvas = document.querySelector("#myCanvas");
49.
50.    // often useful
51.    w = canvas.width;
52.    h = canvas.height;
53.
54.    // important, we will draw with this object
55.    ctx = canvas.getContext('2d');
56.
57.    // Start the animation
58.    requestAnimationFrame(mainLoop);
...
62.    return {
63.        start: start
64.    };
65. };
```
Explanations:
- Note that we now start the game engine in a window.onload callback
  (line 2), so only after the page has been loaded.

- We also moved 99% of the init() method from the previous example
  into the start() method of the game engine, and added the canvas,
  ctx, w, h variables as global variables to the game framework
  object.

- Finally, in the main loop we added a call to
  the drawMonster() function, injecting randomness through the
  parameters: the monster is drawn with an x, y offset of between 0
  and 10, in successive frames of the animation.

- And we clear the previous canvas content before drawing the current
  frame (line 35).

If you try the example, you will see a trembling monster. The canvas is
cleared and the monster drawn in random positions, at around 60 times
per second!

Next, let's see how to interact with it using the mouse or the keyboard.
2.3.3 User interaction and event handling

Input & output: how do events work in Web apps & games?
HTML5 events
There is no input or output in JavaScript. We treat events caused by
user actions as inputs, and we manipulate the DOM structure as
output. Usually in games, we will maintain state variables representing
moving objects like the position and speed of an alien ship, and the
animation loop will refer to these variables in determining the movement
of such objects.

In any case, the events are called DOM events, and we use the DOM APIs
to create event handlers.
How to listen to events
There are three ways to manage events in the DOM structure. You could
attach an event inline in your HTML code like this:
Method #1: declare an event handler in the HTML code
```
<div id="someDiv" onclick="alert('clicked!')"> content of the
div </div>
```

This method is very easy to use, but it is not the recommended way to
handle events. Indeed, it works today but is deprecated (it will probably
be abandoned in the future). Mixing the 'visual layer' (HTML) and 'logic
layer' (JavaScript) in one place is really bad practice and causes a
host of problems during development.
Method #2: attach an event handler to an HTML element in JavaScript
Note that the third parameter describes whether the callback has to be
called during the capture phase. This is not important for now, just
set it to false.
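A minimal sketch of this second method, assuming a <div> with id "someDiv" exists in the page (as in method #1):

```
var someDiv = document.querySelector('#someDiv');

// attach the listener in JavaScript instead of in the HTML code
someDiv.addEventListener('click', function() {
    alert('clicked!');
}, false);  // third parameter: do not use the capture phase
```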
Details of the DOM event are passed to the event listener function

When you create an event listener and attach it to an element, the
listener will create an event object to describe what happened. This
object is provided as a parameter of the callback function:

```
element.addEventListener('click', function(event) {
    /* now you can use the event object inside the callback */
}, false);
```

Depending on the type of event you are listening to, you will consult
different properties from the event object in order to obtain useful
information such as: "which keys are pressed down?", "what is the
location of the mouse cursor?", "which mouse button has been clicked?",
etc.
In the following lessons, we will remind you how to deal with the
keyboard and the mouse (previously covered during the HTML5 Part 1
course) in the context of a game engine (in particular, how to manage
multiple events at the same time), and also demonstrate how you can
accept input from a gamepad using the new Gamepad API.
Further reading
In the method #1 above, we mentioned that "Mixing 'visual layer' (HTML)
and 'logic layer' (JavaScript) ... bad practice", and this is similarly
reflected in many style features being deprecated in HTML5 and moved
into CSS3. The management philosophy at play here is called "the
separation of concerns" and applies in several ways to software
development - at the code level, through to the management of staff.
It's not part of the course, but professionals may find the following
references useful:

- [Separation of concerns - Wikipedia, the free
  encyclopedia](https://en.wikipedia.org/wiki/Separation_of_concerns)

This has been something of a nightmare for years, as different browsers
had different ways of handling key events and key codes ([read this if
you are fond of JavaScript
archaeology](https://unixpapa.com/js/key.html)). Fortunately, it's much
improved today and we can rely on methods that should work in any
browser less than four years old.
After a keyboard-related event (e.g. keydown or keyup), the code of the
key that fired the event will be passed to the listener function. It is
possible to test which key has been pressed or released, like this:

```
1. window.addEventListener('keydown', function(event) {
2.     if (event.keyCode === 37) {
3.         // Left arrow was pressed
4.     }
5. }, false);
```

At line 2, the key code of 37 corresponds to the left arrow key.

You can try key codes [with this interactive
example](http://www.asquare.net/javascript/tests/KeyCode.html), and here
is a list of keyCodes (from [CSS
Tricks](https://css-tricks.com/snippets/javascript/javascript-keycodes/)).
Game requirements: managing multiple keypress / keyrelease events
In a game, we often need to check which keys are being used, at a very
high frequency - typically from inside the game loop that is looping at
up to 60 times per second.

If a spaceship is moving left, chances are you are keeping the left
arrow down, and if it's firing missiles at the same time you must also
be pressing the space bar like a maniac, and maybe pressing the shift
key to release smart bombs.

Sometimes these three keys might be down at the same time, and the game
loop will have to take these three keys into account: move the ship
left, release a new missile if the previous one is out of the screen or
if it reached a target, launch a smart bomb if conditions are met, etc.
Keep the list of pertinent keys in a JavaScript object
The typical method used is: store the list of the keys (or mouse button
or whatever gamepad button...) that are up or down at a given time in a
JavaScript object. For our small game engine we will call this object
"inputStates".

We will update its content inside the different input event listeners,
and later check its values inside the game loop to make the game react
accordingly.
Add this to our game framework:
These are the changes to our small game engine prototype (which is
far from finished yet):

1. We add an empty inputStates object as a global property of the game
   engine.

2. In the start() method, we add event listeners for
   each keydown and keyup event which controls the game.

3. In each listener, we test if an arrow key or the space bar has been
   pressed or released, and we set the properties of the inputStates
   object accordingly. For example, if the space bar is pressed, we
   set inputStates.space = true; but if it's released, we reset it
   to inputStates.space = false.

4. In the main loop (to prove everything is working), we add tests to
   check which keys are down; and if a key is down, we print its name
   on the canvas.

[Here is the online example you can try at
JSBin](https://jsbin.com/razeya/edit)
```
1. // Inits
2. window.onload = function init() {
3.     var game = new GF();
4.     game.start();
...
38.
39.    // check inputStates
40.    if (inputStates.left) {
41.        ctx.fillText("left", 150, 20);
42.    }
43.    if (inputStates.up) {
44.        ctx.fillText("up", 150, 50);
45.    }
46.    if (inputStates.right) {
47.        ctx.fillText("right", 150, 80);
48.    }
49.    if (inputStates.down) {
50.        ctx.fillText("down", 150, 120);
51.    }
52.    if (inputStates.space) {
53.        ctx.fillText("space bar", 140, 150);
54.    }
55.
56.    // Calls the animation loop every 1/60th of second
...
60.    var start = function(){
61.        ...
62.        // Important, we will draw with this object
63.        ctx = canvas.getContext('2d');
64.        // Default font for text
65.        ctx.font = "20px Arial";
66.
67.        // Add the listener to the main window object, and update the states
68.        window.addEventListener('keydown', function(event){
69.            if (event.keyCode === 37) {
70.                inputStates.left = true;
71.            } else if (event.keyCode === 38) {
...
80.        }, false);
81.
82.        // If the key is released, change the states object
83.        window.addEventListener('keyup', function(event){
84.            if (event.keyCode === 37) {
85.                inputStates.left = false;
86.            } else if (event.keyCode === 38) {
...
103.       return {
104.           start: start
105.       };
106. };
```
You may notice that on some computers / operating systems, it is not
possible to simultaneously press the up and down arrow keys, or left and
right arrow keys, because they are mutually exclusive. However space +
up + right should work in combination.

2.3.5 Adding mouse listeners

A few reminders
Working with mouse events requires detecting whether a mouse button is
up or down, identifying that button, keeping track of mouse movement,
getting the x and y coordinates of the cursor, etc.

Special care must be taken when acquiring mouse coordinates because the
HTML5 canvas has default (or directed) CSS properties which could
produce false coordinates. The trick to get the right x and y mouse
cursor coordinates is to use this method from the canvas API:

```
// necessary to take into account CSS boundaries
var rect = canvas.getBoundingClientRect();
```

The width and height of the rect object must be taken into account.
These dimensions correspond to the padding / margins / borders of the
canvas. See how we deal with them in the getMousePos() function in the
next example.

Here is [an online example at JSBin](https://jsbin.com/metavu/edit) that
covers all cases correctly.

Move the mouse over the canvas and press or release mouse buttons.
Notice that we keep the state of the mouse (position, buttons up or
down) as part of the inputStates object, just as we do with the keyboard
(per previous lesson).
Below is the JavaScript source code for this small example:
```
1. var canvas, ctx, width, height;
2. var rect = {x:40, y:40, rayon: 30, width:80, height:80, v:1};
3. var mousepos = {x:0, y:0};
4.
5. function init() {
6.     canvas = document.querySelector("#myCanvas");
7.     ctx = canvas.getContext('2d');
8.     width = canvas.width;
9.     height = canvas.height;
10.
11.    canvas.addEventListener('mousemove', function (evt) {
12.        mousepos = getMousePos(canvas, evt);
13.    }, false);
14.
...
56.        x: evt.clientX - rect.left,
57.        y: evt.clientY - rect.top
58.    };
59. }
```
Explanations:
- Line 25 calculates the angle between the mouse cursor and the
  rectangle.

- Lines 27-28 move the rectangle v pixels along a line between the
  rectangle's current position and the mouse cursor.

- Lines 41-46 translate the rectangle, rotate it, and recenter
  the rotational point to the center of the rectangle (in its new
  position).
Adding mouse listeners to the game framework
Now we will include these listeners into our game framework. Notice that
we changed some parameters (no need to pass the canvas as a parameter of
the getMousePos() function, for example).

[The new online version of the game engine can be tried at
JSBin](https://jsbin.com/rizuyah/edit):

Try pressing arrows and space keys, moving the mouse, and pressing the
buttons, all at the same time. You'll see that the game framework
handles all these events simultaneously because the global variable
named inputStates is updated by keyboard and mouse events, and consulted
to direct movements every 1/60th second.
```
1. // Inits
2. window.onload = function init() {
3.     var game = new GF();
4.     game.start();
...
37.    drawMyMonster(10+Math.random()*10, 10+Math.random()*10);
38.    // Checks inputStates
39.    if (inputStates.left) {
40.        ctx.fillText("left", 150, 20);
41.    }
42.    if (inputStates.up) {
43.        ctx.fillText("up", 150, 40);
44.    }
45.    if (inputStates.right) {
46.        ctx.fillText("right", 150, 60);
47.    }
48.    if (inputStates.down) {
49.        ctx.fillText("down", 150, 80);
50.    }
51.    if (inputStates.space) {
52.        ctx.fillText("space bar", 140, 100);
53.    }
54.    if (inputStates.mousePos) {
55.        ctx.fillText("x = " + inputStates.mousePos.x + " y = " +
56.                     inputStates.mousePos.y, 5, 150);
57.    }
58.    if (inputStates.mousedown) {
59.        ctx.fillText("mousedown b" + inputStates.mouseButton, 5, 180);
60.    }
61.
62.    // Calls the animation loop every 1/60th of second
...
76.    var start = function(){
77.        ...
78.        // Adds the listener to the main window object, and updates the states
79.        window.addEventListener('keydown', function(event){
80.            if (event.keyCode === 37) {
81.                inputStates.left = true;
82.            } else if (event.keyCode === 38) {
...
91.        }, false);
92.
93.        // If the key is released, changes the states object
94.        window.addEventListener('keyup', function(event){
95.            if (event.keyCode === 37) {
96.                inputStates.left = false;
97.            } else if (event.keyCode === 38) {
...
106.       }, false);
107.
108.       // Mouse event listeners
109.       canvas.addEventListener('mousemove', function (evt) {
110.           inputStates.mousePos = getMousePos(evt);
111.       }, false);
112.
113.       canvas.addEventListener('mousedown', function (evt) {
114.           inputStates.mousedown = true;
115.           inputStates.mouseButton = evt.button;
116.       }, false);
117.
118.       canvas.addEventListener('mouseup', function (evt) {
119.           inputStates.mousedown = false;
120.       }, false);
121.
...
128.       return {
129.           start: start
130.       };
131. };
```
2.3.6 Gamepad events

Hi! In this lesson we will look at how we can manage such an input
device! This is a Microsoft Xbox 360 controller - a wired one - with a
USB plug. And we will see how we can use the Gamepad API that is
available on modern browsers... except a few ones...

The first thing you can do is to add some event listeners for the
gamepadconnected and gamepaddisconnected events.

If I plug in the gamepad (I'm using Google Chrome for this demo), I
plug it in...

Here we are! I need to press a button for the gamepad to be detected.
If I just plug it in, it won't be detected. On Firefox, I tried too...
and it has been detected as soon as I plugged it in.

Once it's plugged in, you can get a property of the event that is called
gamepad... and you can get the number of buttons, and the number of
axes.
Here, it says it's got 4 axes... the axes are for the joysticks...
horizontal and vertical axes. We will see how to manage that in a
minute. And it's got 17 buttons.

We can also detect when we disconnect it. So... I just unplugged it and
it's also detected...

But, in order to scan... in order to know in real time the state of the
different buttons... and you've got some analog buttons like these
triggers... and you've got axes... they are the joysticks here... and
buttons... you need to scan the state of the gamepad at a very fast
frequency.

This is done in another example here, where I can press some buttons
and you see that the buttons are detected.

And in the case of an analog button, I'm using a progress HTML element
to draw/show the pressure... So, how do you manage these values?
You've got to have a mainloop that is very similar to the animation
loop (or you can do this in the animation loop). And we call a method, a
function called scanGamepads, that will ask for the gamepads 60 times
per second. Here we say: "Navigator! Hey browser! Give me all the
gamepads you've got!" And you've got a gamepad array... if the gamepad
is detected, it's non null and you can use it. In this example we use
just one gamepad.

The first gamepad that is defined will be used for setting the "gamepad"
global variable. This is the variable we check in the loop: "please,
give me an updated status of the gamepad!", by calling scanGamepads()...
then we're going to check the buttons that are pressed... so how do we
check the buttons?

We get the number of buttons with gamepad.buttons, we do an iteration on
them, we get the current button and check whether it is pressed or not.

This is a boolean property: "pressed".

And in the case there is a "value" that is defined, it means it's an
analog button, like the triggers here... and the value will be between 0
and 1. And this is what we draw here.
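As a rough sketch of what the demo does when scanning the buttons (the function name checkButtons is just illustrative, not necessarily the exact code of the example):

```
function checkButtons(gamepad) {
    for (var i = 0; i < gamepad.buttons.length; i++) {
        var button = gamepad.buttons[i];

        // "pressed" is a boolean: true while the button is held down
        if (button.pressed) {
            console.log("Button " + i + " is pressed");
        }

        // analog buttons (e.g. the triggers) also expose a "value" between 0 and 1
        if (button.value > 0) {
            console.log("Button " + i + " value: " + button.value);
        }
    }
}
```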
If you want to try another demo and look at the code for managing
multiple gamepads, I added a link to this demo that has been done by
people from Mozilla...

If you plug in a second gamepad (I've got only one here), it will
display another row for checking the state of the second gamepad.

Another thing that is interesting is to detect the joystick values
here... you can see the progress bars moving. The joystick returns
values between -1 and +1, 0 is the neutral position here.

The way you detect that is that instead of doing an iteration on the
buttons, you do an iteration on the axes... the checkAxes function
proposed in the course will just iterate on the axes array you get from
the gamepad object.

gamepad.axes[i] here will return the status... the value of the
current axis.

axes[0] means horizontal here, axes[1] means vertical for the left
joystick, axes[2] will mean left/right for the second joystick and
axes[3] up/down.

This is how we manage that. Look at the code, it's very simple.
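A possible sketch of such a checkAxes function (a simplified version, not necessarily the exact code of the demo):

```
function checkAxes(gamepad) {
    // axes[0]/axes[1]: left stick (horizontal/vertical)
    // axes[2]/axes[3]: right stick (horizontal/vertical)
    for (var i = 0; i < gamepad.axes.length; i++) {
        // each axis value is between -1 and +1, 0 being the neutral position
        var value = gamepad.axes[i];
        console.log("Axis " + i + ": " + value);
    }
}
```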
And in the course you will see how we can make the small monster move
using the gamepad.

It's, I think, in the next lesson... I gave an example at the end, for
moving the monster with the gamepad. We just reused the functions I've
shown.

And here we can make the monster move using the gamepad as you can
see... with the left joystick.

We just added scanGamepads() in the mainloop... updateGamepadStatus()...
and updateGamepadStatus() will scan the gamepads, check the buttons, and
check the axes 60 times per second... the rest of the code is the same
as I presented earlier. So I hope you enjoyed this part of the course
and that you will use a gamepad in the small game you are going to
develop during this week. Bye! Bye!
Some games, mainly arcade/action games, are designed to be used with a
gamepad:

[The Gamepad API](https://w3c.github.io/gamepad/) is currently
supported by all major browsers (including Microsoft Edge), except
Internet Explorer - see the [up to date version of this feature's
compatibility table](https://caniuse.com/#feat=gamepad). Note that the
API is still a draft and may change in the future. We recommend using a
wired Xbox 360 controller or a PS2 controller, both of which
should work out of the box on Windows XP, Windows Vista, Windows, and
Linux desktop distributions. Wireless controllers are supposed to work
too, but we haven't tested the API with them. You may find someone who
has managed but they've probably needed to install an operating
system driver to make it work.
Detecting gamepads
Events triggered when the gamepad is plugged in or unplugged
Let's start with a 'discovery' script to check that the gamepad is
connected, and to see the range of facilities it can offer to
JavaScript.

If the user interacts with a controller (presses a button, moves a
stick), a [gamepadconnected](https://w3c.github.io/gamepad/#event-gamepadconnected) event
will be sent to the page. NB the page must be visible! The event object
passed to the gamepadconnected listener has a gamepad property which
describes the connected device.

[Example on JSBin](https://jsbin.com/kiduwu/edit?console,output)
1. window.addEventListener("gamepadconnected", function(e) {
2. var gamepad = e.gamepad;
3. var index = gamepad.index;
4. var id = gamepad.id;
5. var nbButtons = gamepad.buttons.length;
6. var nbAxes = gamepad.axes.length;
7.
8. console.log("Gamepad No " + index +
9. ", with id " + id + " is connected. It has " +
10. nbButtons + " buttons and " +
11. nbAxes + " axes");
12. });

If a gamepad is disconnected (you unplug it),
a [gamepaddisconnected](https://w3c.github.io/gamepad/#event-gamepaddisconnected) event
is fired. Any references to the gamepad object will have
their connected property set to false.

1. window.addEventListener("gamepaddisconnected", function(e) {
2. var gamepad = e.gamepad;
3. var index = gamepad.index;
4.
5. console.log("Gamepad No " + index + " has been disconnected");
6. });

Scanning for gamepads
If you reload the page, and if the gamepad has already been detected by
the browser, it will not fire the gamepadconnected event again. This can
be problematic if you use a global variable for managing the gamepad, or
an array of gamepads in your code. As the event is not fired, these
variables will stay undefined...

So, you need to regularly scan for gamepads available on the system. You
should still use that event listener if you want to do something special
when the system detects that a gamepad has been unplugged.

Here is the code to use to scan for a gamepad:
1. var gamepad;
2.
3. function mainloop() {
4. ...
11.
12. function scangamepads() {
13. // function called 60 times/s
14. // the gamepad is a "snapshot", so we need to set it
15. // 60 times / second in order to have an updated status
16. var gamepads = navigator.getGamepads();
17.
18. for (var i = 0; i < gamepads.length; i++) {
19. // current gamepad is not necessarily the first
20. if(gamepads[i] !== undefined)
21. gamepad = gamepads[i];
22. }
23. }
In this code, we check every 1/60 second for newly or re-connected
gamepads, and we update the gamepad global var with the first gamepad
object returned by the browser. We need to do this so that we have an
accurate "snapshot" of the gamepad state, with fixed values for the
buttons, axes, etc. If we want to check the current button and joystick
statuses, we must poll the browser at a high frequency and call for an
updated snapshot.

From the specification: "getGamepads retrieves a snapshot of the data
for the currently connected and interacted with gamepads."

This code will be integrated (as well as the event listeners presented
earlier) in the next JSBin examples.

To keep things simple, the above code works with a single gamepad
- [here's a good example of managing multiple
gamepads](https://github.com/luser/gamepadtest).

Detecting button status and axes values (joysticks)

Properties of the gamepad object
The gamepad object returned in the event listener [has different
properties](https://w3c.github.io/gamepad/#gamepad-interface):

- id: a string indicating the id of the gamepad. Useful with
  the mapping property below.

- index: an integer used to distinguish multiple controllers (gamepad
  1, gamepad 2, etc.).

- connected: true if the controller is still connected, false if it
  has been disconnected.

- mapping: not implemented yet by most browsers. It will allow the
  controls of the gamepad to be remapped. A layout map is
  associated with the id of the gamepad. By default, and before they
  implement support for different mappings, all connected
  gamepads [use a standard default
  layout](https://w3c.github.io/gamepad/#remapping).

- axes: an array of floating point values containing the state of each
  axis on the device. Usually these represent the analog sticks,
  with a pair of axes giving the position of the stick in its X and
  Y axes. Each axis is normalized to the range of -1.0...1.0, with
  -1.0 representing the up or left-most position of the axis, and
  1.0 representing the down or right-most position of the axis.

- buttons: an array
  of [GamepadButton](https://w3c.github.io/gamepad/#idl-def-GamepadButton) objects
  containing the state of each button on the device. Each
  GamepadButton has a pressed and a value property.

- The pressed property is a Boolean property indicating whether the
  button is currently pressed (true) or unpressed (false).

- The value property is a floating point value used to enable
  representing analog buttons, such as the triggers, on many modern
  gamepads. The values are normalized to the range 0.0...1.0, with
  0.0 representing a button that is not pressed, and 1.0
  representing a button that is fully depressed.

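To make this concrete, here is a short illustrative snippet (not taken from the JSBin examples) that reads one button and one axis from a gamepad snapshot obtained with navigator.getGamepads(); the meaning of axes[0] as the horizontal axis of the left stick is the usual default layout, not a guarantee:

```
var gamepads = navigator.getGamepads();
var gamepad = gamepads[0];            // first gamepad slot, may be null/undefined

if (gamepad && gamepad.connected) {
    var button0 = gamepad.buttons[0]; // a GamepadButton object
    console.log("pressed: " + button0.pressed + ", value: " + button0.value);

    var axis0 = gamepad.axes[0];      // usually the left stick, horizontal, in [-1, 1]
    console.log("left stick X: " + axis0);
}
```
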
Detecting whether a button is pressed
Digital, on/off buttons evaluate to either one or zero (respectively), whereas analog
buttons will return a floating-point value between zero and one.

[Example on JSBin](https://jsbin.com/heriqej/edit). You might also take
a look at this demo that does the same thing but with multiple
gamepads.

Code for checking if a button is pressed:

1. function checkButtons(gamepad) {
2. for (var i = 0; i < gamepad.buttons.length; i++) {
3. // do nothing if the gamepad is not ok
4. if(gamepad === undefined) return;
5. if(!gamepad.connected) return;
6.
7. var b = gamepad.buttons[i];
8.
9. if(b.pressed) {
10. console.log("Button " + i + " is pressed.");
11. if(b.value !== undefined)
12. // analog trigger L2 or R2, value is a float in [0, 1]
13. console.log("Its value: " + b.value);
14. }
15. }
16. }

In line 11, notice how we detect whether the current button is an
analog trigger (L2 or R2 on Xbox360 or PS2/PS3 gamepads).

Next, we'll integrate it into the mainloop code. Note that we also need
to call the scangamepads function from the loop, to generate fresh
"snapshots" of the gamepad with updated properties. Without this call,
the gamepad.buttons will return the same states every time.
1. // detect axis (joystick states)
2. function checkAxes(gamepad) {
3. if(gamepad === undefined) return;
4. if(!gamepad.connected) return;
5.
6. for (var i = 0; i < gamepad.axes.length; i++) { ...
Detecting the direction (left, right, up, down, diagonals) and angle of the left joystick
We could add an inputStates object similar to the one we used in the
game framework, and check its values in the mainloop to decide whether
to move the player up/down/left/right, including diagonals - or maybe
we'd prefer to use the current angle of the joystick. Here is how we
manage this:

[JSBin example](https://jsbin.com/vuxoqo/edit?js,output):
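The full code is in the JSBin example above; as a rough sketch of the idea (the axis indices and the 0.2 dead zone are plausible defaults rather than the exact course values, and inputStates is the object shared with the game framework), the left stick can be mapped to directions and its angle computed with Math.atan2:

```
function checkAxes(gamepad) {
    if (gamepad === undefined) return;
    if (!gamepad.connected) return;

    // left stick: axes[0] = horizontal, axes[1] = vertical, both in [-1, 1]
    var x = gamepad.axes[0];
    var y = gamepad.axes[1];

    // small dead zone so a centered stick does not trigger movement
    inputStates.left  = (x < -0.2);
    inputStates.right = (x >  0.2);
    inputStates.up    = (y < -0.2);
    inputStates.down  = (y >  0.2);

    // angle of the stick in radians (0 = right, positive downwards on screen)
    inputStates.angle = Math.atan2(y, x);
}
```
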
Hi (=dn reports): Have successfully installed a Logitech Attack
3 joy-stick on my Thinkpad running Linux Mint 17.1. It runs all of the
code presented here correctly, reporting 11 buttons and 3 axes (the
knurled rotating knob (closest to my pen) is described as a 'throttle'
or 'accelerator').

Traditionally Linux has been described as 'for work only' or 'no
games', so it was a pleasant surprise to see how easy things were - no
"driver" to install (it seems important to uninstall any existing
connection between a device and the x-server), installed "joystick"
testing and calibration tool, and the "jstest-gtk" configuration and
testing tool; and that was 'it' - no actual configuration was
necessary!
External resources
- [THE BEST resource: this paper from smashingmagazine.com tells you
  everything about the GamePad
  API](https://www.smashingmagazine.com/2015/11/gamepad-api-in-web-games/).
  Very complete, explains how to set a dead zone, a keyboard
  fallback, etc.

- [Good article about using the gamepad API on the Mozilla Developer
  Network site](https://hacks.mozilla.org/2013/12/the-gamepad-api/)

- [An interesting article on gamepad support, published on the
  HTML5 Rocks Web
  site](https://www.html5rocks.com/en/tutorials/doodles/gamepad/)

- [gamepad.js](https://github.com/neogeek/gamepad.js) is a JavaScript
  library to enable the use of gamepads and joysticks in the
  browser. It smooths over the differences between browsers,
  platforms, APIs, and a wide variety of gamepad/joystick devices.

- [Another library we used in our team for controlling a mobile robot (good support from the
  authors)](https://github.com/kallaspriit/HTML5-JavaScript-Gamepad-Controller-Library)

- [Gamepad Controls for HTML5 Games](https://blog.teamtreehouse.com/gamepad-controls-html5-games)

Hi! This time, I will show you in this video what we have done so
far.

We started from one of the examples from the HTML5 Part 1 course, the
one that just uses the canvas to draw a small monster.

We have a canvas here, and in a function called when the page is
loaded, in the window.onload callback, we get the canvas, we get the
context of the canvas and we call the draw monster function.

Here we have got just a function that is called once after the page
is loaded, and this function just draws the monster using translate,
stroke, fillRect and so on.

Then, in order to turn this into a small game framework and in order
to draw the monster 60 times per second, we introduced what is called
the black box model for creating JavaScript objects.

Instead of having functions, we have got objects.

We create an object that is the game framework:

game = new GF(), with capital letters; this is called a constructor
function.

In JavaScript, when you start a function name with a capital letter, it
means that it is meant to be used with new.

Then we call just some parts of the game framework that are exposed
to an external user.

Here we can call game.start, but we cannot call just anything. With this
way of designing objects, the internals of the game framework that are
exposed are located in an object we return at the end of the constructor.

At the end of the game framework constructor function, we return
start: this is the name of the property we will be able to use from the
outside, and start here is the name of an internal function.

So, game.start will call this start function inside the game
framework. In this function we do the initialization: we create a
div for displaying the number of frames per second, we get the canvas,
the context, and we call requestAnimationFrame(mainLoop) in order to
start the animation.

The mainLoop is a private function inside the game framework: it is
not exposed, we do not return its name, so it is a sort of private
function.

And what do we do in the mainLoop? We clear the canvas, we draw the
monster and we call again requestAnimationFrame(mainLoop).
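As a reminder, here is a minimal sketch of this pattern (the canvas id and the function bodies are illustrative, not the exact course code):

```
// constructor function: capitalized name, meant to be used with "new"
function GF() {
    // "private" variables, visible only inside the framework
    var canvas, ctx;

    function mainLoop(time) {
        // clear the canvas, draw the monster, measure FPS...
        // then ask for the next frame
        requestAnimationFrame(mainLoop);
    }

    function start() {
        // initialization: get the canvas and its context, then start the animation
        canvas = document.querySelector("#mycanvas");
        ctx = canvas.getContext("2d");
        requestAnimationFrame(mainLoop);
    }

    // only what is returned here is usable from the outside
    return {
        start: start
    };
}

// typically done once the page is loaded (window.onload)
var game = new GF();
game.start();          // works
// game.mainLoop(...)  // undefined: mainLoop stays private
```
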

In addition, we also measure the frames per second, taking into
account the current time.

In order to measure the number of FPS, we pass the time and we
compute deltas in this function here, which is also a private function
that has been presented previously in the course.

This is a very small skeleton, and then we can build on that by adding
new methods and new properties.

The properties are the local variables that are usable only inside
the game framework. In the current lesson, we are adding user
interaction, like detecting the mouse buttons that are pressed, the mouse
position or the different keys that can be pressed.

You can notice that the diagonal movements are very smooth, because we
can manage different key presses at the same time, and also I can press a
key and a button and the monster will move faster.

Multiple events are managed using a global status variable, that we
called inputStates, and that is checked 60 times per second from inside
the mainLoop.

The event listeners, the key event listeners and the mouse event
listeners will just add properties to this variable.

Let me show you how it is done.

It is done in the start method that is called when you want to
initialize the game framework: we declare the event listeners and, in
case we have got a keydown and if this key is the left arrow for example,
we set the left property of the inputStates object to true.

And in the animation loop, we call a method called
updateMonsterPosition that will check this global inputStates and, if
the left key is pressed, we will display a message "left key pressed"
and we will modify the speed of the monster.
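A minimal sketch of that listener pattern (modern event.key values are used here for readability; the course examples may test key codes instead):

```
var inputStates = {};

// the listeners only record the state, they do not move anything
window.addEventListener('keydown', function (event) {
    if (event.key === "ArrowLeft") inputStates.left = true;
});

window.addEventListener('keyup', function (event) {
    if (event.key === "ArrowLeft") inputStates.left = false;
});
```
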

And this variable speedX is taken into account to increment the x
coordinate.

And this is how we can get a smooth animation, because this
updateMonsterPosition is called 60 times per second. If I keep a key
pressed, it is not important if the key is repeated or if I have got
several keys pressed, because the status of the left key in the
inputStates will be unchanged and my monster will keep moving to the
left.

Take time to look at the code, read the explanations on the page
slowly, and I will meet you in the next video, in which we will add
enemies and obstacles and detect collisions.

Make the monster move using the arrow keys, and increase its speed
by pressing a mouse button

To conclude this topic about events, we will use the arrow keys to
move our favorite monster up/down/left/right, and make it accelerate
when we press a mouse button while it is moving. Notice that pressing
two keys at the same time will make it move diagonally.

[Check this online example at JSBin](https://jsbin.com/yebufu/edit):
we've changed very few lines of code from the previous evolution!

1. // The monster!
2. var monster = {
3. x:10,
4. y:10,
5. speed:1
6. };

Where monster.x and monster.y define the monster's
current position, and monster.speed corresponds to the number of pixels
the monster will move between animation frames.

Note: this is not the best way to animate objects in a game; we will
look at a far better solution - "time based animation" - in another
lesson.
1. var mainLoop = function(time){
2. // Main function, called each frame
3. measureFPS(time);
4.
13.
14. // Calls the animation loop every 1/60th of second
15. requestAnimationFrame(mainLoop);
16. };

We moved all the parts that check the input states into
the updateMonsterPosition() function:
- In this function, we added two properties to
  the monster object: speedX and speedY, which correspond to the
  number of pixels we will add to the x and y position of the monster
  at each new frame of animation.

- We first set these to zero (line 2), then depending on the
  keyboard input states, we set them to a value equal
  to monster.speed or -monster.speed, modified by the keys that are
  being pressed at the time (lines 4-20).

- Finally, we add speedX and speedY pixels to the x and/or y position
  of the monster (lines 36 and 37).

- When the function is called by the game loop,
  if speedX and/or speedY are non-zero they will change
  the x and y position of the monster in successive frames, making it
  move smoothly.

- If a mouse button is pressed or released, we set
  the monster.speed value to +5 or to +1. This will make the
  monster move faster when a mouse button is down, and return to its
  normal speed when no button is down.

Notice that two arrow keys and a mouse button can be pressed down at the
same time. In this situation, the monster will take a diagonal direction
and accelerate. This is why it is important to keep all the input states
up-to-date, and not to handle single events individually.
Gamepad enhancements

Let's add the gamepad utility functions from the previous lesson (we
tidied them a bit too, removing the code for displaying the progress
bars, buttons, etc.), added a gamepad property to the game framework,
and added one new call in the game loop for updating the gamepad status.

[Check the result on JSBin](https://jsbin.com/yidohe/edit)

The new updated mainloop:
1. var mainLoop = function(time){
2. //main function, called each frame
3. measureFPS(time);
4.
16.
17. // Call the animation loop every 1/60th of second
18. requestAnimationFrame(mainLoop);
19. };

And here is the updateGamePadStatus function (the inner function calls
are to gamepad utility functions detailed in the previous lesson):
1. function updateGamePadStatus() {
2. // get new snapshot of the gamepad properties
3. scangamepads();
4. // Check gamepad button states
5. checkButtons(gamepad);
6. // Check joysticks
7. checkAxes(gamepad);
-8. }
-```
+8. }
-The checkAxes function updates the left, right, up, down properties of
-the inputStates object we previously used with key events. Therefore,
+
The checkAxes function updates the left, right, up, down properties
+of the inputStates object we previously used with key events. Therefore,
without changing any code in the updatePlayerPosition function, the
-monster moves by joystick command!
-
+monster moves by joystick command!

2.4 Time-based Animation

2.4.1 Introduction

Hi! This time I will talk to you about what is called time-based animation.

Here, we have got a simple example of a bouncing rectangle, and the animation is
done in an animationLoop that is called 60 times per second using
requestAnimationFrame. We clear the rectangle that corresponds to the canvas area,
and we add a fixed increment to the x position.

If x bounces on a side, we just reverse the speed.

The speed is a fixed value of 3px per frame. What happens if we run this
application on a low-end smartphone or a Raspberry Pi, a low-end computer with a
not very powerful GPU?

We can simulate what would happen by just adding a big loop here to slow down
the animation artificially.

This is what I have done here. Here I count up to 70 million in the loop.

This takes time and slows down the animation.

The rectangle is just moving 3px every frame, but as the frame
rate drops down a lot, the actual speed of the rectangle on the screen
is much slower than what I have got here.

And here I also added a loop, so this is a normal speed.

What can we do in order not to have the speed going down?

We will compute the time between two consecutive frames.

So here I am using the Date object from JavaScript to get the current
time, and I compute a delta that is the difference between the current time
and the time at the previous animation.

So the delta is the number of milliseconds elapsed since the last
animation.

And then we will compute the distance: instead of using 3px per frame,
the number of pixels we move the rectangle will increase if the time
between frames increases.

So we have got a function, explained in the course, that will
compute the distance taking into account the time elapsed since the last
frame and the speed we want to achieve: not in pixels per frame, but in
pixels per second.

This time, we are adding the increment, but it is adjusted at each
frame of animation.

Here is an example that I slow down, and if I change the time that
this will take, you can see that it goes from smooth to a bit jerky... to
really jerky, but the speed on the screen is the same.

It takes the same time to bounce from one side to the other.

This is what is done in real games.

I also wanted to show you another thing: instead of using the Date
object from JavaScript that gives the time in milliseconds, as we
are animating 60 times per second, having a high resolution timer
with sub-millisecond accuracy is much better, and it was asked for by game
developers.

The callback function that is called when you use
requestAnimationFrame can have an extra parameter that we have not used
until now.

So you can add a parameter here that will be a high resolution time.

And then you compute the delta using this time that is automatically
passed to you by the requestAnimationFrame API.

Here, I added this technique to the game framework.

So, I am in the mainLoop from the last example we used with the game
framework for moving the small monster using the keys.

I artificially slowed down the animation. So if I do this, I have got
8 frames per second, and you can see that the monster moves, but it
jumps with larger steps between two consecutive frames.

If I remove this artificial part of the code that slows down the
animation, I go up to 60 FPS and I have got a smooth animation.

That means that this example will be usable on many different devices,
as the animation will be adapted depending on the power of the
device.

So this is time-based animation.

Let's study an important technique known as "time-based animation",
that is used by nearly all "real" video games.

This technique is useful when:

- Your application runs on different devices, where 60 frames/s
  is definitely not possible. More generally, you want your
  animated objects to move at the same speed on screen, regardless of
  the device that runs the game.

  For example, imagine a game or an animation running on a smartphone
  and on a desktop computer with a powerful GPU. On the phone, you
  might achieve a maximum of 20 fps with no guarantee that this number
  will be constant; whereas on the desktop, you will reliably achieve
  60 fps. If the application is a car racing game, for example, your
  car will take 30s to make a complete loop on the race track when
  running on a desktop, whilst on a smartphone it might take 5
  minutes.

  The way to address this is to run at a lower frame-rate on the
  phone. This will enable the car to race around the track in the same
  amount of (real) time as it does on a powerful desktop computer.

  Solution: you need to compute the amount of time that has
  elapsed between the last frame that was drawn and the current one;
  and depending on this delta of time, adjust the distance the car
  must move across the screen. We will see several examples of this
  later.

- You want to perform some animations only a few times per
  second. For example, in sprite-based animation (drawing different
  images as a character moves, for example), you will not change the
  images of the animation 60 times/s, but only ten times per second.
  Mario will walk on the screen in a 60 fps animation, but his posture
  will not change every 1/60th of second.

- You may also want to accurately set the framerate, leaving some
  CPU time for other tasks. Many games consoles limit
  the frame-rate to 1/30th of a second, in order to allow time for
  other sorts of computations (physics engine, artificial
  intelligence, etc.)

How to measure time when we use requestAnimationFrame?

Let's take a simple example with a small rectangle that moves from left
to right. At each animation loop, we erase the canvas content, calculate
the rectangle's new position, draw the rectangle, and call the animation
loop again. So you animate a shape as follows (note: steps 2 and 3 can
be swapped):

1. erase the canvas,
2. draw the shapes,
3. move the shapes,
4. go to step 1.

When we use requestAnimationFrame for implementing such an animation, as
we did in the previous lessons, the browser tries to keep the frame-rate
at 60 fps, meaning that the ideal time between frames will be 1/60
second = 16.66 ms.
Example #1: no use of time-based animation
[Online example at JSBin](https://jsbin.com/dibuze/edit)

1. <!DOCTYPE html>
2. <html lang="en">
3. <head>
4. <meta charset=utf-8 />
5. <title>Small animation example</title>
6. <script>
...
51. </script>
52. </head>
53.
54. <body onload="init();">
55. <canvas id="mycanvas" width="200" height="50" style="border: 2px solid black">
56. </canvas>
57. </body>
58. </html>

If you try this example on a low-end smartphone (use
this [URL](https://jsbin.com/dibuze) for the example in stand-alone
mode) and if you run it at the same time on a desktop PC, it is obvious
that the rectangle moves faster on the desktop computer screen than on
your phone.

This is because the frame rate differs between the computer and the
smartphone: perhaps 60 fps on the computer and 25 fps on the phone. As
we only move the rectangle in the animationLoop, in one second the
rectangle will be moved 25 times on the smartphone compared with 60
times on the computer! Since we move the rectangle the same number of
pixels each time, the rectangle moves faster on the computer!
Example #2: simulating a low-end device

Here is the same example to which we have added a loop that wastes time
right in the middle of the animation loop. It will artificially extend
the time spent inside the animation loop, making the 1/60th of second
ideal impossible to reach.

[Try it on JsBin](https://jsbin.com/remide/edit) and notice that the
square moves much slower on the screen. Indeed, its speed is a direct
consequence of the extra time spent in the animation loop.

1. function animationLoop() {
2. ...
3. for(var i = 0; i < 50000000; i++) {
4. // slow down artificially the animation
5. }
6. ...
7. requestAnimationFrame(animationLoop);
8. }

2.4.2 Measuring time between frames

Let's find out how to measure time between frames to achieve a
constant speed on screen, even when the frame rate changes.
Method #1: using the JavaScript Date object

Let's modify the example from the previous lesson slightly by adding
time-based animation. Here we use the "standard JavaScript" way for
measuring time, using JavaScript's Date object:

1. var time = new Date().getTime();

The getTime() method returns the number of milliseconds since midnight
on January 1, 1970. This is the number of milliseconds that have elapsed
during the Unix epoch (!).

There is an alternative. We could have called:

1. var time = Date.now();

So, if we measure the time at the beginning of each animation loop, and
store it, we can then compute the delta of times elapsed between two
consecutive loops.

We then apply some simple math to compute the number of pixels we need
to move the shape to achieve a given speed (in pixels/s).

Example #1: using time based animation: the bouncing square

[Online example at JSBin](https://jsbin.com/riferi/edit):

1. <!DOCTYPE html>
2. <html lang="en">
3. <head>
4. <meta charset=utf-8 />
5. <title>Move rectangle using time based animation</title>
6. <script>
...
81. </script>
82. </head>
83.
84. <body onload="init();">
85. <canvas id="mycanvas" width="200" height="50" style="border: 2px solid black"></canvas>
86. </body>
87. </html>

In this example, we only added a few lines of code for measuring the time and computing the time
elapsed between two consecutive frames (see line 38). Normally, requestAnimationFrame(callback)
tries to call the callback function every 16.66 ms (this corresponds to 60 frames/s)... but this is never
exactly the case. If you do a console.log(delta) in the animation loop, you will see that even on a
very powerful computer, the delta is "very close" to 16.6666 ms, but 99% of the time it will be slightly
different.

The function calcDistanceToMove(delta, speed) takes two parameters: 1)
the time elapsed in ms, and 2) the target speed in pixels/s.

Try this example on a smartphone, use
this [link](https://jsbin.com/jeribi) to run the JSBin example in
stand-alone mode. Normally you should see no difference in speed, but it
may look a bit jerky on a low-end smartphone or on a slow
computer. This is the correct behavior.

Or you can try the next example that simulates a complex animation loop
that takes a long time to draw each frame...

Example #2: using a simulation that spends a lot of time in the animation loop, to
compare with the previous example

[Try it on JsBin](https://jsbin.com/jeribi/edit):

We added a long loop in the middle of the animation loop. This time, the
animation should be very jerky. However, notice that the apparent speed
of the square is the same as in the previous example: the animation
adapts itself!
1. function animationLoop() {
2. // Measure time
3. now = new Date().getTime();
4.
15. // clear canvas
16. ctx.clearRect(0, 0, width, height);
17.
18. for(var i = 0; i < 50000000; i++) {
19. // just to slow down the animation
20. }
21.
25. x += incX;
26.
27. // check collision on left or right
28. if((x+10 >= width) || (x <= 0)) {
29. // cancel move + inverse speed
30. x -= incX;
31. speedX = -speedX;
35. then = now;
36.
37. requestAnimationFrame(animationLoop);
38. }
Method #2: using the new HTML5 high-resolution timer

Since the beginning of HTML5, game developers, musicians, and
others have asked for a sub-millisecond timer to be able to avoid some
glitches that occur with the regular JavaScript timer. This API is
called the "[High Resolution Time API](https://www.w3.org/TR/hr-time/)".

This API is very simple to use - just do:

1. var time = performance.now();

... to get a sub-millisecond time-stamp. It is similar to Date.now()
except that the accuracy is much higher and that the result is not
exactly the same. The value returned is a floating point number, not an
integer value!

From [this article that explains the High Resolution Time
API](https://www.sitepoint.com/discovering-the-high-resolution-time-api/):
"The only method exposed is now(), which returns a DOMHighResTimeStamp
representing the current time in milliseconds. The timestamp is very
accurate, with precision to a thousandth of a millisecond. Please note
that while Date.now() returns the number of milliseconds elapsed since 1
January 1970 00:00:00 UTC, performance.now() returns the number of
milliseconds, with microseconds in the fractional part, from
performance.timing.navigationStart, the start of navigation of the
document, to the performance.now() call. Another important difference
between Date.now() and performance.now() is that the latter is
monotonically increasing, so the difference between two calls will never
be negative."

To sum up:

- performance.now() returns the time since the load of the document
  (it is called a DOMHighResTimeStamp), with sub-millisecond
  accuracy, as a floating point value.

- Date.now() returns the number of milliseconds since the Unix epoch,
  as an integer value.

Support for this API is quite good - see the compatibility
table [online](https://caniuse.com/#feat=high-resolution-time).

Here is [a version on JSBin of the previous example with the bouncing
rectangle, that uses the high resolution
timer](https://jsbin.com/wecaho/edit).

Only two lines have changed, but the accuracy is much higher, if you
uncomment the console.log(...) calls in the main loop. You will see the
difference.

Method #3: using the optional timestamp parameter of the callback function of requestAnimationFrame

> This is the recommended method!

There is an optional parameter that is passed to the callback function
called by requestAnimationFrame: a timestamp!

[The requestAnimationFrame API
specification](https://www.w3.org/TR/animation-timing/) says that this
timestamp corresponds to the time elapsed since the page has been
loaded.

1. <!DOCTYPE html>
2. <html lang="en">
3. <head>
4. <meta charset=utf-8 />
5. <title>Time based animation using the parameter of the requestAnimationFrame callback</title>
6. <script>
7. var canvas, ctx;
8. var width, height;
9. var x, y, incX; // incX is the distance from the previously drawn rectangle
10. // to the new one
11. var speedX; // speedX is the target speed of the rectangle in pixels/s
12.
13. // for time based animation
14. var now, delta=0;
15. // High resolution timer
16. var oldTime = 0;
17.
18. // Called after the DOM is ready (page loaded)
19. function init() {
20. // init the different variables
21. canvas = document.querySelector("#mycanvas");
22. ctx = canvas.getContext('2d');
23. width = canvas.width;
24. height = canvas.height;
25.
26. x=10; y = 10;
27. // Target speed in pixels/second, try with high values, 1000, 2000...
28. speedX = 200;
29.
30. // Start animation
31. requestAnimationFrame(animationLoop);
32. }
33.
34. function animationLoop(currentTime) {
35. // How long between the current frame and the previous one?
36. delta = currentTime - oldTime;
37.
38. // Compute the displacement in x (in pixels) as a function of the time elapsed
39. // and of the wanted speed
40. incX = calcDistanceToMove(delta, speedX);
41.
42. // clear canvas
43. ctx.clearRect(0, 0, width, height);
44.
45. ctx.strokeRect(x, y, 10, 10);
46.
47. // move rectangle
48. x += incX;
49.
50. // check collision on left or right
51. if(((x+10) > width) || (x < 0)) {
52. // inverse speed
53. x -= incX;
54. speedX = -speedX;
55. }
56.
57. // Store time
58. oldTime = currentTime;
59.
60. // asks for next frame
61. requestAnimationFrame(animationLoop);
62. }
63.
64. var calcDistanceToMove = function(delta, speed) {
65. return (speed * delta) / 1000;
66. }
67.
68. </script>
69. </head>
70.
71. <body onload="init();">
72. <canvas id="mycanvas" width="200" height="50" style="border: 2px solid black"></canvas>
73. </body>
74. </html>
75.

2.4.3 Setting the frame rate
-Principle: even if the mainloop is called 60 times per second, ignore
-some frames in order to reach the desired frame rate.
-
-It is also possible to set the frame rate using time based animation. We
-can set a global variable that corresponds to the desired frame rate and
-compare the elapsed time between two executions of the animation loop:
-
-- If the time elapsed is too short for the target frame rate: do
- nothing,
-
-- If the time elapsed exceeds the delay corresponding to the chosen
- frame rate: draw the frame and reset this time to zero.
+
Principle: even if the mainloop is called 60 times per second, ignore
+some frames in order to reach the desired frame rate.
-Here is the [online example at JSBin](https://jsbin.com/bonutur/edit).
+
It is also possible to set the frame rate using time based animation.
+We can set a global variable that corresponds to the desired frame rate
+and compare the elapsed time between two executions of the animation
+loop:
-Try to change the parameter value of the call to:
+
+
If the time elapsed is too short for the target frame rate: do nothing,
+
If the time elapsed exceeds the delay corresponding to the chosen frame rate: draw the frame and reset this time to zero.
+
-```
-setFrameRateInFramesPerSecond(5); // try other values!
-```
+
1. <sup>setFrameRateInFramesPerSecond(5); // try other values!</sup>
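Before the full page, here is a bare skeleton of this pattern, with the drawing code left out; the variable names follow the source code shown just below:

```
// Skeleton of the frame-limiting pattern: accumulate the time between
// requestAnimationFrame calls and only redraw when it exceeds delayInMs.
var delayInMs, delta, totalTimeSinceLastRedraw = 0;
var then = performance.now();

function setFrameRateInFramesPerSecond(frameRate) {
    delayInMs = 1000 / frameRate;
}
setFrameRateInFramesPerSecond(5); // try other values!

function mainloop(time) {
    delta = time - then;   // time elapsed since the previous call
    then = time;

    if (totalTimeSinceLastRedraw > delayInMs) {
        // enough time has passed for the target frame rate: draw the frame here...
        totalTimeSinceLastRedraw = 0;  // ...and reset the accumulator
    } else {
        // too early: just accumulate the elapsed time
        totalTimeSinceLastRedraw += delta;
    }
    requestAnimationFrame(mainloop);
}
requestAnimationFrame(mainloop);
```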
Source code of the example:
HTML & JS code!
1. <!DOCTYPE html>
2. <html lang="en">
3. <head>
4. <meta charset=utf-8 />
5. <title>Set framerate using a high resolution timer</title>
6. </head>
7. <body>
8. <p>This example measures and sums deltas of time between
9. consecutive frames of animation. It includes
10. a <code>setFrameRateInFramesPerSecond</code> function you can
11. use to reduce the number of frames per second of the main
12. animation.</p>
13.
14. <canvas id="myCanvas" width="700" height="350">
15. </canvas>
16. <script>
17. var canvas = document.querySelector("#myCanvas");
18. var ctx = canvas.getContext("2d");
19. var width = canvas.width, height = canvas.height;
20. var lastX = width * Math.random();
21. var lastY = height * Math.random();
22. var hue = 0;
23.
24. // Michel Buffa: set the target frame rate. TRY TO CHANGE THIS VALUE AND SEE
25. // THE RESULT. Try 2 frames/s, 10 frames/s, 60 frames/s. Normally there
26. // should be a limit of 60 frames/s in the browser's implementations.
27. setFrameRateInFramesPerSecond(60);
28.
29. // for time based animation. delayInMs corresponds to the target framerate
30. var now, delta, delayInMs, totalTimeSinceLastRedraw = 0;
31.
32. // High resolution timer
33. var then = performance.now();
34.
35. // start the animation
36. requestAnimationFrame(mainloop);
37.
38. function setFrameRateInFramesPerSecond(frameRate) {
39. delayInMs = 1000 / frameRate;
40. }
41.
42. // each function that is going to be run as an animation should end by
43. // asking again for a new frame of animation
44. function mainloop(time) {
45. // Here we will only redraw something if the time we want between frames has
46. // elapsed
47. // Measure time with high resolution timer
48. now = time;
49.
50. // How long between the current frame and the previous one?
51. delta = now - then;
52. // TRY TO UNCOMMENT THIS LINE AND LOOK AT THE CONSOLE
53. // console.log("delay = " + delayInMs + " delta = " + delta + " total time = " +
54. // totalTimeSinceLastRedraw);
55.
56. // If the total time since the last redraw is > delay corresponding to the wanted
57. // framerate, then redraw, else add the delta time between the last call to line()
58. // by requestAnimFrame to the total time..
59. if (totalTimeSinceLastRedraw > delayInMs) {
60. // if the time between the last frame and now is > delay then we
61. // clear the canvas and redraw
62.
63. ctx.save();
64.
65. // Trick to make a blur effect: instead of clearing the canvas
66. // we draw a rectangle with a transparent color. Changing the 0.1
67. // for a smaller value will increase the blur...
68. ctx.fillStyle = "rgba(0,0,0,0.1)";
69. ctx.fillRect(0, 0, width, height);
70.
71. ctx.translate(width / 2, height / 2);
72. ctx.scale(0.9, 0.9);
73. ctx.translate(-width / 2, -height / 2);
74.
75. ctx.beginPath();
76. ctx.lineWidth = 5 + Math.random() * 10;
77. ctx.moveTo(lastX, lastY);
78. lastX = width * Math.random();
79. lastY = height * Math.random();
80.
81. ctx.bezierCurveTo(width * Math.random(),
82. height * Math.random(),
83. width * Math.random(),
84. height * Math.random(),
85. lastX, lastY);
86.
87. hue = hue + 10 * Math.random();
88. ctx.strokeStyle = "hsl(" + hue + ", 50%, 50%)";
89. ctx.shadowColor = "white";
90. ctx.shadowBlur = 10;
91. ctx.stroke();
92.
93. ctx.restore();
94.
95. // reset the total time since last redraw
96. totalTimeSinceLastRedraw = 0;
97. } else {
98. // sum the total time since last redraw
99. totalTimeSinceLastRedraw += delta;
100. }
101.
102. // Store time
103. then = now;
104.
105. // request new frame
106. requestAnimationFrame(mainloop);
107. }
108. </script>
109. </body>
110. </html>
Same technique with the bouncing rectangle

See how we can set both the speed (in pixels/s) and the frame rate using
a high-resolution timer with this [modified version on JSBin of the
example with the rectangle that also uses this
technique](https://jsbin.com/momeci/edit).
It's quite possible to use setInterval(function, interval) if you do not
need accurate scheduling.

To animate a monster at 60 fps but blink its eyes once per second,
you would use a mainloop with requestAnimationFrame and target a 60 fps
animation, but you would also have a call to setInterval(changeEyeColor,
1000); the changeEyeColor function would update a global
variable, eyeColor, every second, which would be taken into account
within the drawMonster function, called 60 times/s from the mainloop.
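Here is a minimal sketch of that combination; the canvas creation and the monster drawing are made-up placeholders, not the course's game framework:

```
// A 60 fps requestAnimationFrame loop combined with a 1 Hz setInterval.
var canvas = document.createElement("canvas");
canvas.width = 250; canvas.height = 200;
document.body.appendChild(canvas);
var ctx = canvas.getContext("2d");

var eyeColor = "white"; // global variable updated once per second

function changeEyeColor() {
    eyeColor = (eyeColor === "white") ? "red" : "white";
}
setInterval(changeEyeColor, 1000); // no accurate scheduling needed here

function drawMonster() {
    ctx.fillStyle = "green";
    ctx.fillRect(50, 50, 120, 120);   // body
    ctx.fillStyle = eyeColor;         // the eyes use the shared variable
    ctx.fillRect(75, 80, 20, 20);
    ctx.fillRect(125, 80, 20, 20);
}

function mainloop() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    drawMonster();                    // called about 60 times/s
    requestAnimationFrame(mainloop);
}
requestAnimationFrame(mainloop);
```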
2.4.4 Adding time-based animation
To add time-based animation to our game engine, we will be using the
technique discussed in the previous lesson. This technique is now widely
supported by browsers, and adds time-based animation to our game
framework, through the timestamp parameter passed to the callback
function (mainLoop) by the call to requestAnimationFrame(mainLoop).
[Here is an online example of the game framework at
JSBin](https://jsbin.com/xacebu/edit): this time, the monster has a
speed in pixels/s and we use time-based animation. Try it and verify the
smoothness of the animation; the FPS counter on a MacBook Pro core i7
shows 60 fps.

[Now try this slightly modified
version](https://jsbin.com/gazatuquya/edit?html,js,output) in which we
added a delay inside the animation loop. This should slow down the frame
rate. On a MacBook Pro + core i7, the frame rate drops down to 37 fps.
However, if you move the monster using the arrow keys, its speed on the
screen is the same, except that it's not as smooth as in the previous
version, which ran at 60 fps.
Here are the parts we changed:

- Declaration of the monster object - now the speed is in pixels/s
  instead of in pixels per frame
1. // The monster !
2. var monster = {
3. x:10,
4. y:10,
5. speed:100, // pixels/s this time !
6. };
- We added a timer(currentTime) function that returns the delta of the
  time elapsed since its last call

We refer to it from the game loop, to measure the time between frames.
Notice that here we pass the delta as a parameter to
the updateMonsterPosition call:
1. function timer(currentTime) {
2. var delta = currentTime - oldTime;
3. oldTime = currentTime;
4. return delta;
...
22.
23. // call the animation loop every 1/60th of second
24. requestAnimationFrame(mainLoop);
25. };
- Finally, we use the time-delta in
  the updateMonsterPosition(...) function
1. function updateMonsterPosition(delta) {
2. ...
3. // Compute the incX and incY in pixels depending
4. // on the time elapsed since last redraw
5. monster.x += calcDistanceToMove(delta, monster.speedX);
6. monster.y += calcDistanceToMove(delta, monster.speedY);
7. }
2.5.1 Animating multiple objects
Hi, welcome! Let me show you how we can add many animated objects to the
game framework.

You can imagine them as being the enemies the player should fight, or
whatever. For the sake of this example, we are using black balls, but
you can imagine small images or small monsters instead.

Using a constructor function here is interesting because we can design a
sort of class.

I mean a bit like Java classes or C# classes, if we make a comparison
with other object-oriented languages. So we just say that the ball has
an x position and a y position, an angle, a speed (the 'v' here is for
the speed), and a radius.

We also said that each ball will be able to move and to be drawn on the
screen.

It is a way to encapsulate in one single function the properties and the
methods for manipulating the balls. And the advantage is that now we can
create many balls using the new operator and passing different
parameters.

Let's have a look at a function that will build a certain number of
balls with different parameters. We called it createBalls: it takes as a
parameter a number of balls and will, in a loop, create new balls. Each
new ball here is created with a random x position, a random y position,
a random angle between 0 and 2*PI, a random speed, and a given size. So
I can change the size here, I can use another size: the radius is a
fixed parameter here. Every ball is added to an array, so we have got a
variable called ballArray that contains all the balls.

At initialization, when the page is loaded, we call this createBalls
function that will fill the ball array. The mainLoop is called 60 times
per second and goes along the whole ballArray, and for each ball in the
array we call 'move', which will change the x and y position depending
on the angle and the speed of the ball, and we draw the balls.

In the middle we test if the ball is colliding with a side and we change
the angle.

I can just call createBalls with a large number: here I created 100
balls, 1000 balls, and I can change some of their properties. This small
example is 50 lines of code long, and we can just take these functions
and add them inside the game framework.

So here, what we did is that we just added the ballArray variable inside
the game framework, we called createBalls in the start function of the
game framework - the start function, I am inside it here - so I create
160 balls or 1 ball, whatever I like.

And we call the draw ball and move ball functions from inside the
animation loop.

If you look at the animation loop, we clear the canvas, we draw the
monster and we update the monster position to take into account the
keys. If I press some keys the monster moves. Then we call updateBalls,
which will move all the balls and draw all the balls.

What we have now is the last example with the moving monster,
plus a set of enemies that are animated.
In this section, we will see how we can animate and control not only the
player but also other objects on the screen.

Let's study a simple example: animating a few balls and detecting
collisions with the surrounding walls. For the sake of simplicity, we
will not use time-based animation in the first examples.

Animating multiple balls which bounce off horizontal and vertical walls

[Online example at JSBin](https://jsbin.com/fikomik/edit?js,output):
In this example, we define a constructor function for creating balls.
This is a way to design JavaScript "pseudo classes" as found in other
object-oriented languages like Java, C#, etc. It's useful when you plan
to create many objects of the same class. Using this we could animate
hundreds of balls on the screen.

Each ball has an x and y position, and in this example, instead of
working with angles, we defined two "speeds" - horizontal and vertical
speeds - in the form of the increments we will add to
the x and y positions at each frame of animation. We also added a
variable for adjusting the size of the balls: the radius.

Here is the constructor function for building balls:
JavaScript code!

1. // Constructor function for balls
2. function Ball(x, y, vx, vy, diameter) {
3. // property of each ball: a x and y position, speeds, radius
4. this.x = x;
...
20. this.x += this.vx;
21. this.y += this.vy;
22. };
23. }
Using a constructor function makes it easy to build new balls as
follows:

1. var b1 = new Ball(10, 10, 2, 2, 5); // x, y, vx, vy, radius
2. var b2 = new Ball(100, 130, 4, 5, 5);
3. etc...
We defined two methods in the constructor function for moving the ball
and for drawing the ball as a black filled circle. Here is the syntax
for moving and drawing a ball:

1. b1.draw();
2. b1.move();
We will call these methods from inside the mainLoop, and as you'll see,
we will create many balls. This object-oriented design makes it easier
to handle large quantities.

Here is the rest of the code from this example:
1. var canvas, ctx, width, height;
2.
3. // array of balls to animate
4. var ballArray = [];
5.
6. function init() {
7. canvas = document.querySelector("#myCanvas");
8. ctx = canvas.getContext('2d');
9. width = canvas.width;
10. height = canvas.height;
11.
...
16. }
17.
18. function createBalls(numberOfBalls) {
19. for(var i=0; i < numberOfBalls; i++) {
20.
21. // Create a ball with random position and speed.
22. // You can change the radius
...
36. ctx.clearRect(0, 0, width, height);
37.
38. // for each ball in the array
39. for(var i=0; i < ballArray.length; i++) {
40. var ball = ballArray[i];
41.
42. // 1) move the ball
...
54.
55. function testCollisionWithWalls(ball) {
56. // left
57. if (ball.x < ball.radius) { // x and y of the ball are at the center of the circle
+57. if (ball.x < ball.radius) { // x and y of the ball are at the center of the circle
58. ball.x = ball.radius; // if collision, we replace the ball at a position
-59. ball.vx *= -1; // where it's exactly in contact with the left border
+59. ball.vx *= -1; // where it's exactly in contact with the left border
60. } // and we reverse the horizontal speed
61. // right
-62. if (ball.x > width - (ball.radius)) {
+62. if (ball.x > width - (ball.radius)) {
63. ball.x = width - (ball.radius);
64. ball.vx *= -1;
65. }
66. // up
-67. if (ball.y < ball.radius) {
+67. if (ball.y < ball.radius) {
68. ball.y = ball.radius;
69. ball.vy *= -1;
70. }
71. // down
-72. if (ball.y > height - (ball.radius)) {
+72. if (ball.y > height - (ball.radius)) {
73. ball.y = height - (ball.radius);
74. ball.vy *= -1;
75. }
-76. }
-```
+76. }
Notice that:

- All the balls are stored in an array (line 4),

- We wrote a createBalls(nb) function that creates a given number of
  balls (and stores them in the array) with random values for position
  and speed (lines 18-32),

- In the mainLoop, we iterate on the array of balls and for each ball
  we: 1) move it, 2) test if it collides with the boundaries of the
  canvas (in the function testCollisionWithWalls), and 3) we draw the
  balls (lines 38-50). The order of these steps is not critical and
  may be changed.

- The function that tests collisions is straightforward (lines
  55-76). We did not use "if... else if" since a ball may sometimes
  touch two walls at once (in the corners). In that rare case, we need
  to invert both the horizontal and vertical speeds. When a ball
  collides with a wall, we need to replace it in a position where it
  is no longer against the wall (otherwise it will collide again
  during the next animation loop execution).
Similar example but with the ball direction as an angle, and a single velocity variable
[Try this example at JSBin](https://jsbin.com/begaci/edit): it behaves
in the same way as the previous example.

Note that we just changed the way we designed the balls and computed the
angles after they rebound from the walls. The changes are highlighted in
bold:
1. var canvas, ctx, width, height;
2.
3. // Array of balls to animate
4. var ballArray = [];
...
8. }
9.
10. function createBalls(numberOfBalls) {
11. for(var i=0; i < numberOfBalls; i++) {
12.
13. // Create a ball with random position and speed.
14. // You can change the radius
...
29.
30. function testCollisionWithWalls(ball) {
31. // left
32. if (ball.x < ball.radius) {
33. ball.x = ball.radius;
34. ball.angle = -ball.angle + Math.PI;
35. }
36. // right
37. if (ball.x > width - (ball.radius)) {
38. ball.x = width - (ball.radius);
39. ball.angle = -ball.angle + Math.PI;
40. }
41. // up
42. if (ball.y < ball.radius) {
43. ball.y = ball.radius;
44. ball.angle = -ball.angle;
45. }
46. // down
47. if (ball.y > height - (ball.radius)) {
48. ball.y = height - (ball.radius);
49. ball.angle =-ball.angle;
50. }
...
69. this.x += this.v * Math.cos(this.angle);
70. this.y += this.v * Math.sin(this.angle);
71. };
72. }
Using angles or horizontal and vertical increments is equivalent.
However, one method might be preferable to the other: for example, to
control an object that follows the mouse, or that tracks another object
in order to attack it, angles would be more practical input to the
computations required.
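For instance, to steer an object towards a target (the mouse cursor, the player...), Math.atan2 gives the angle directly. This is a hypothetical illustration, not code taken from the course examples:

```
// Hypothetical illustration: steer an object that has x, y, v (speed)
// and angle properties towards a target point.
function steerTowards(ball, targetX, targetY) {
    // Math.atan2(dy, dx) returns the angle, in radians, of the vector
    // going from the ball to the target
    ball.angle = Math.atan2(targetY - ball.y, targetX - ball.x);
}

// The move() method seen above then keeps working unchanged:
//   this.x += this.v * Math.cos(this.angle);
//   this.y += this.v * Math.sin(this.angle);
```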
2.5.2 Adding balls to the game framework

This time, let's extract the source code used to create the balls, and
include it in our game framework. We are also going to use time-based
animation. The distance that the player and each ball should move is
computed and may vary between animation frames, depending on the
time-delta since the previous frame.

[Online example at JSBin](https://jsbin.com/tehuve/edit).

Try to move the monster with arrow keys and use the mouse button while
moving to change the monster's speed. Look at the source code and change
the parameters controlling the creation of the balls: number, speed,
radius, etc. Also, try changing the monster's default speed. See the
results.
For this version, we copied and pasted some code from the previous
example and we also modified the mainLoop to make it more readable. In a
later lesson, we will split the game engine into different files and
clean the code-base to make it more manageable. But for the moment,
jsbin.com is a good playground to try out and test things...
1. var mainLoop = function(time){
2. //main function, called each frame
3. measureFPS(time);
4.
...
19.
20. // Call the animation loop every 1/60th of second
21. requestAnimationFrame(mainLoop);
22. };
As you can see, we draw the player/monster, we update its position, and
we call an updateBalls function to do the same for the balls: draw and
update their position.
1. function updateMonsterPosition(delta) {
2. monster.speedX = monster.speedY = 0;
3. // check inputStates
4. if (inputStates.left) {
...
17.
18. function updateBalls(delta) {
19. // for each ball in the array
20. for(var i=0; i < ballArray.length; i++) {
21. var ball = ballArray[i];
22.
23. // 1) move the ball
...
29. // 3) draw the ball
30. ball.draw();
31. }
32. }
Now, in order to turn this into a game, we need to create some
interactions between the player (the monster) and the obstacles/enemies
(balls, walls)... It's time to take a look at collision detection.
2.5.3 Collision detection
In this chapter, we explore some techniques for detecting collisions
between objects. This includes moving and static objects. We first
present three "classic" collision tests, and follow them with brief
sketches of more complex algorithms.
Circle collision test
Collision between circles is easy. Imagine there are two circles:

1. Circle c1 with center (x1,y1) and radius r1;
2. Circle c2 with center (x2,y2) and radius r2.

Imagine there is a line running between those two center points. The
distance from each center point to the edge of its circle is, by
definition, equal to that circle's radius. So:

- if the edges of the circles touch, the distance between the centers
  is r1+r2;

- any greater distance and the circles don't touch or collide; whereas

- any less and they do collide or overlap.

> In other words: if the distance between the center points is less than
> the sum of the radii, then the circles collide.
Let's implement this as a JavaScript function step-by-step:

JavaScript code!
1. function circleCollideNonOptimised(x1, y1, r1, x2, y2, r2) {
2. var dx = x1 - x2;
3. var dy = y1 - y2;
4. var distance = Math.sqrt(dx * dx + dy * dy);
5. return (distance < r1 + r2);
6. }
This could be optimized a little by averting the need to compute a square
root:

1. (x2-x1)^2 + (y2-y1)^2 <= (r1+r2)^2

Which yields:
1. function circleCollide(x1, y1, r1, x2, y2, r2) {
2. var dx = x1 - x2;
3. var dy = y1 - y2;
4. return ((dx * dx + dy * dy) < (r1 + r2)*(r1+r2));
5. }
This technique is attractive because a "bounding circle" can often be
used with graphic objects of other shapes, providing they are not too
elongated horizontally or vertically.

Let's test this idea

[Try this example at JSBin](https://jsbin.com/ciyiko/edit): move the
monster with the arrow keys and use the mouse to move "the player": a
small circle. Try to make collisions between the monster and the circle
you control.
This online example uses the game framework (without time-based
animation in this one). We just added a "player" (for the moment, a
circle that follows the mouse cursor), and a "monster". We created two
JavaScript objects for describing the monster and the player, and these
objects both have a boundingCircleRadius property:
JavaScript code!
1. var mainLoop = function(time){
2. //main function, called each frame
3. measureFPS(time);
4. // Clear the canvas
5. clearCanvas();
6. // Draw the monster
7. drawMyMonster();
8. // Check inputs and move the monster
9. updateMonsterPosition();
10. updatePlayer();
11. checkCollisions();
12. // Call the animation loop every 1/60th of second
13. requestAnimationFrame(mainLoop);
14. };
15. function updatePlayer() {
16. // The player is just a circle drawn at the mouse position
17. // Just to test circle/circle collision.
18. if(inputStates.mousePos) { // Move the player and draw it as a circle
19. player.x = inputStates.mousePos.x; // when the mouse moves
20. player.y = inputStates.mousePos.y;
21. ctx.beginPath();
22. ctx.arc(player.x, player.y, player.boundingCircleRadius, 0, 2*Math.PI);
23. ctx.stroke();
24. }
25. }
26. function checkCollisions() {
27. if(circleCollide(player.x, player.y, player.boundingCircleRadius,
28. monster.x, monster.y, monster.boundingCircleRadius)) {
29. // Draw everything in red
30. ctx.fillText("Collision", 150, 20);
31. ctx.strokeStyle = ctx.fillStyle = 'red';
32. } else {
33. // Draw in black
34. ctx.fillText("No collision", 150, 20);
35.
36. ctx.strokeStyle = ctx.fillStyle = 'black';
37. }
38. }
39. function circleCollide(x1, y1, r1, x2, y2, r2) {
40. var dx = x1 - x2;
41. var dy = y1 - y2;
42. return ((dx * dx + dy * dy) < (r1 + r2)*(r1+r2));
43. }
[Advanced technique] Use several bounding circles for complex shapes, recompute bounding circles when the shape changes over time (animated objects)
This is an advanced technique: you can use a list of bounding circles or,
better still, a hierarchy of bounding circles in order to reduce the
number of tests. The image below of an "arm" can be associated with a
hierarchy of bounding circles. First, test against the "big one" on the
left that contains the whole arm; then, if there is a collision, test the
two sub-circles, etc. This recursive algorithm will not be covered
in this course, but it's a classic optimization.
In 3D, you can use spheres instead of circles:
The famous game Gran Turismo 4 on the PlayStation 2 uses bounding
spheres for detecting collisions between cars:
Rectangle (aligned along X and Y axis) detection test
Let's look at a simple illustration:
From this:
> To detect a collision between two aligned rectangles, we project the
> horizontal and vertical axes of the rectangles onto the X and Y axes.
> If both projections overlap, there is a collision!

[Try this online demonstration of rectangle - rectangle detection](https://silentmatt.com/rectangle-intersection/)
1 - Only horizontal axis projections overlap: no collision between rectangles

3 - Horizontal and vertical axis projections overlap: collision detected!
Here is a JavaScript implementation of a rectangle - rectangle (aligned)
collision test:

JavaScript code!

1. // Collisions between aligned rectangles
2. function rectsOverlap(x1, y1, w1, h1, x2, y2, w2, h2) {
3.
4. if ((x1 > (x2 + w2)) || ((x1 + w1) < x2))
5. return false; // No horizontal axis projection overlap
6.
7. if ((y1 > (y2 + h2)) || ((y1 + h1) < y2))
8. return false; // No vertical axis projection overlap
9.
10. return true; // If previous tests failed, then both axis projections
11. // overlap and the rectangles intersect
12. }

Let's test this method

[Try this example at JSBin](https://jsbin.com/fubima/edit): move the
monster with the arrow keys and use the mouse to move "the player": this
time a small rectangle. Try to make collisions between the monster and
the rectangle you control. Notice that this time the collision detection
is more accurate and can work with elongated shapes.
(Screenshot: the player square is inside the monster's bounding rectangle - collision detected.)
Here is what we modified:

1. ...
2. // The monster!
3. var monster = {
4. x: 80,
...

24. player.x = inputStates.mousePos.x;
25. player.y = inputStates.mousePos.y;
26.
    // draws a rectangle centered on the mouse position
    // we draw it as a square.
    // We remove size/2 to the x and y position at drawing time in
    ...
    // the 0, 0 of a rectangle is at its top left corner)
    var size = player.boundingCircleRadius;
    ctx.fillRect(player.x - size / 2, player.y - size / 2, size, size);
27. }
28. }

29. function checkCollisions() {
30. // Bounding rect position and size for the player. We need to translate
31. // it to half the player's size
32. var playerSize = player.boundingCircleRadius;
33. var playerXBoundingRect = player.x - playerSize / 2;
34. var playerYBoundingRect = player.y - playerSize / 2;
35. // Same with the monster bounding rect
36. var monsterXBoundingRect = monster.x - monster.width / 2;
37. var monsterYBoundingRect = monster.y - monster.height / 2;
38. if (rectsOverlap(playerXBoundingRect, playerYBoundingRect, playerSize, playerSize,
        monsterXBoundingRect, monsterYBoundingRect, monster.width, monster.height)) {
        ctx.fillText("Collision", 150, 20);
        ctx.strokeStyle = ctx.fillStyle = 'red';
39. } else {
        ctx.fillText("No collision", 150, 20);
        ctx.strokeStyle = ctx.fillStyle = 'black';
40. }
41. }

42. // Collisions between aligned rectangles
43. function rectsOverlap(x1, y1, w1, h1, x2, y2, w2, h2) {
44. if ((x1 > (x2 + w2)) || ((x1 + w1) < x2))
        return false; // No horizontal axis projection overlap
45. if ((y1 > (y2 + h2)) || ((y1 + h1) < y2))
        return false; // No vertical axis projection overlap
46. return true; // If previous tests failed, then both axis projections
        // overlap and the rectangles intersect
47. }
Many real games use aligned rectangle collision tests
Testing "circle-circle" or "rectangle-rectangle" collisions is cheap in
terms of computation. "Rectangle-rectangle" collisions are used in many
2D games, such as Dodonpachi (one of the most famous and enjoyable
shoot'em'ups ever made - you can play it using the MAME arcade game
emulator):
You could also try the free Genetos shoot'em up game (Windows only) that
retraces the history of the genre over its different levels ([download
here](https://tatsuya-koyama.com/works/games/genetos/)). Press the G key
to see the bounding rectangles used for collision tests. Here is a
screenshot:
These games run at 60 fps and can have hundreds of bullets moving at the
same time. Collisions have to be tested: did the player's bullets hit an
enemy, AND did an enemy bullet (for one of the many enemies) hit the
player? These examples demonstrate the efficiency of such collision test
techniques.
Other collision tests
In this section, we only give sketches and examples of more
sophisticated collision tests. For further explanation, please follow
the links provided.
Aligned rectangle-circle
There are only two cases when a circle intersects with a rectangle:

1. Either the circle's center lies inside the rectangle, or

2. One of the edges of the rectangle intersects the circle.

We propose a function for this, implemented after reading [this Thread at
StackOverflow](https://stackoverflow.com/questions/401847/circle-rectangle-collision-detection-intersection).
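A minimal sketch of that closest-point approach is given below. The parameter conventions (the rectangle's top-left corner and size first, then the circle's center and radius) are assumptions chosen to match the circRectsOverlap call used later in the framework, so the course's actual implementation may differ in its details:

```
// Sketch of an aligned rectangle-circle test (closest-point approach).
// (x0, y0) is assumed to be the rectangle's top-left corner, (w0, h0) its size,
// (cx, cy) the circle's center and r its radius.
function circRectsOverlap(x0, y0, w0, h0, cx, cy, r) {
    // Find the point of the rectangle that is closest to the circle's center
    var closestX = Math.max(x0, Math.min(cx, x0 + w0));
    var closestY = Math.max(y0, Math.min(cy, y0 + h0));

    // If that point is closer than r, the circle and the rectangle overlap;
    // this also covers the case where the center lies inside the rectangle
    var dx = cx - closestX;
    var dy = cy - closestY;
    return (dx * dx + dy * dy) < (r * r);
}
```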
- Math and physics: please read [this external resource (for math), a
  great article that explains the physics of a pool
  game](https://web.archive.org/web/20181231090226/http:/archive.ncsa.illinois.edu/Classes/MATH198/townsend/math.html).

- [Example of colliding balls at JSBin (author:
  M.Buffa)](https://jsbin.com/juzefa/edit), and also [try this example
  that does the same with a blurring
  effect](https://jsbin.com/nopefe/edit).

The principle behind collision resolution for pool balls is as follows.
You have a situation where two balls are colliding, and you know their
velocities (step 1 in the diagram below). You separate out each ball's
velocity (the solid blue and green arrows in step 1, below) into two
perpendicular components: the "normal" component heading towards the
other ball (the dotted blue and green arrows in step 2) and the
"tangential" component that is perpendicular to the other ball (the
dashed blue and green arrows in step 2). We use "normal" for the first
component as its direction is along the line that links the centers of
the balls, and this line is perpendicular to the collision plane (the
plane that is tangent to the two balls at the collision point).
The solution for computing the resulting velocities is to swap the
normal components between the two balls (as we move from step 2 to
step 3), while each ball keeps its own tangential component, then finally
recombine the velocities for each ball to achieve the result (step 4):
The above picture has been borrowed from [this interesting article about
how to implement pool-like collision detection in
C#](https://sinepost.wordpress.com/2012/09/05/making-your-balls-bounce/).

Of course, we will only compute these steps if the balls collide, and
for that test we will have used the basic circle collision test outlined
earlier.
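To make the four steps concrete, here is a minimal sketch assuming two balls of equal mass and a perfectly elastic collision; it is not the code of the JSBin examples, whose names and details differ:

```
// Sketch of the "swap the normal components" idea for two colliding balls
// of equal mass, each with x, y, vx, vy properties.
function resolveBallCollision(b1, b2) {
    // Normal vector: the line joining the two centers
    var nx = b2.x - b1.x;
    var ny = b2.y - b1.y;
    var dist = Math.sqrt(nx * nx + ny * ny);
    if (dist === 0) return;          // centers coincide, nothing sensible to do
    nx /= dist;                      // normalize
    ny /= dist;

    // Step 2: project each velocity on the normal direction (scalar part)...
    var v1n = b1.vx * nx + b1.vy * ny;
    var v2n = b2.vx * nx + b2.vy * ny;

    // ...the tangential part is what remains once the normal part is removed
    var t1x = b1.vx - v1n * nx, t1y = b1.vy - v1n * ny;
    var t2x = b2.vx - v2n * nx, t2y = b2.vy - v2n * ny;

    // Steps 3 and 4: swap the normal components (equal masses), keep the
    // tangential ones, and recombine
    b1.vx = t1x + v2n * nx;  b1.vy = t1y + v2n * ny;
    b2.vx = t2x + v1n * nx;  b2.vy = t2y + v1n * ny;
}
```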
To illustrate the algorithm, [here is an example at JSBin that displays
the different vectors in real time, with only two
balls](https://jsbin.com/vuqeti/6/edit). The math for the collision test
has also been expanded in the source code to make the computations
clearer. Note that this is not for beginners: advanced math and physics
are involved!
To go further... video game physics!
For those who are not afraid of some math and physics and would like
to learn how to do collision detection in a more realistic way (using
physics modeling), we recommend [this tutorial, the first of a
three-part series about video game
physics](https://www.toptal.com/game/video-game-physics-part-i-an-introduction-to-rigid-body-dynamics).

2.5.4 Adding collision detection to the game framework
Our previous lesson enabled us to animate balls in the game framework
([this example](https://jsbin.com/tehuve/edit)).

Now we can add the functionality presented in the last lesson, to
perform collision tests between a circle and a rectangle. It will be
called 60 times/s when we update the position of the balls. If there is
a collision between a ball (circle) and the monster (rectangle), we set
the ball color to red.

[Try the example at JSBin!](https://jsbin.com/bohebe/edit?js,output)
1. function updateBalls(delta) {
2. // for each ball in the array
3. for(var i=0; i < ballArray.length; i++) {
4. var ball = ballArray[i];
5.
6. // 1) move the ball
...
15. ball.x, ball.y, ball.radius)) {
16.
17. //change the color of the ball
18. ball.color = 'red';
19. }
20.
21. // 3) draw the ball
22. ball.draw();
23. }
24. }
The only additions are: lines 13-19 in the updateBalls function, and
the circRectsOverlap function!
2.6.1 Introduction
In this lesson, we learn how to animate images - which are known as
"sprites". This technique uses components from a collection of animation
frames. By drawing different component images, rapidly,
one after the other, we obtain an animation effect.

Here is an example of a spritesheet, where each line animates a woman
walking in a particular direction:

The first line corresponds to the direction we called "south", the
second "south west", the third "west", etc. The 8 lines cover movement
in all eight cardinal directions.

Each line is composed of 13 small images which together comprise an
"animated" sprite. If we draw each of the 13 images of the first
line, in turn, we will see a woman who seems to move towards the screen.
And if we draw each sprite a little closer to the bottom of the screen,
we obtain a woman who appears to approach the bottom of the screen,
swinging her arms and legs, as she walks!

Try it yourself: here is [a quick and dirty example to try at
JSBin](https://jsbin.com/jokodod/edit?html,js,console,output) working
with the above sprite sheet. Use the arrow keys and take a look! We
accentuated the movement by changing the scale of the sprite as the
woman moves up (further from us) or down (closer to us).

We have not yet investigated how this works, nor have we built it into
the small game engine we started to build in earlier chapters. First,
let's explain how to use "sprites" in JavaScript and canvas.

2.6.2 Different sorts of sprite sheets

There are different sorts of sprite sheets. See some examples below.
Multiple postures on a single sprite sheet
A sprite sheet with different "sprite" sets that correspond to different
"postures": this is the case for the walking woman we just saw in the
previous lesson. This sprite sheet contains 8 different sets of sprites,
or postures, each corresponding to a direction. In this example, each
posture comprises exactly 13 sprites, aligned in a single row across the
sprite sheet.
One posture per sprite sheet
Some sprite sheets have a single sprite set, spreading over multiple
lines, like this walking robot:

This is an example that you will see a lot around the Internet, in many
sprite sheets. For the full animation of the robot, we will need
multiple sprite sheets: one for each posture.

As another example, here is the "jumping robot" sprite sheet:

Whereas the walking robot posture is made of 16 sprites, the jumping
robot needs 26!
Hybrid sprite sheets
You will also find sprite sheets that contain completely different sets
of sprites (this one comes from [the famous Gridrunner iOS game by Jeff
Minter](https://www.youtube.com/watch?v=1tLNcj1ygFA)):

So, when we think about writing a "sprite engine", we need to consider
how to support different layouts of sprite sheet.
2.6.3 Sprite extraction and animation
Principle
Before doing anything interesting with the sprites, we need to:

1. Load the sprite sheet(s),

2. Extract the different postures and store them in an array of
   sprites,

3. Choose the appropriate one, and draw it within the animation loop,
   taking into account elapsed time. We cannot draw a different image
   of the woman walking at 60 updates per second. We will have to
   create a realistic "delay" between each change of sprite image
   (a minimal sketch of these three steps follows just below).
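Here is a minimal sketch of those three steps. The sprite sheet URL and the frame dimensions (48 x 92 pixels, 13 frames per posture) come from Example #1 below; the delay value and the canvas handling are simplified placeholders:

```
// 1) load the sheet, 2) pick a sprite with a source rectangle, 3) change
// the sprite only after a realistic delay, inside a requestAnimationFrame loop.
var canvas = document.createElement("canvas");
canvas.width = 100; canvas.height = 100;
document.body.appendChild(canvas);
var ctx = canvas.getContext("2d");

var FRAME_WIDTH = 48, FRAME_HEIGHT = 92;
var NB_FRAMES_PER_POSTURE = 13, POSTURE_ROW = 0; // row 0 = walking "south"
var FRAME_DELAY_MS = 100;   // "realistic" delay between two sprite images

var currentFrame = 0;
var lastFrameChange = 0;

// 1) Load the sprite sheet
var spritesheet = new Image();
spritesheet.src = "https://i.imgur.com/3VesWqx.png";
spritesheet.onload = function() {
    requestAnimationFrame(animate);
};

function animate(timestamp) {
    // 3) change the sprite image only when enough time has elapsed
    if (timestamp - lastFrameChange > FRAME_DELAY_MS) {
        currentFrame = (currentFrame + 1) % NB_FRAMES_PER_POSTURE;
        lastFrameChange = timestamp;
    }

    // 2) extract the current sprite: the 9-argument drawImage takes a source
    // rectangle in the sprite sheet and a destination rectangle in the canvas
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(spritesheet,
                  currentFrame * FRAME_WIDTH, POSTURE_ROW * FRAME_HEIGHT,
                  FRAME_WIDTH, FRAME_HEIGHT,
                  0, 0, FRAME_WIDTH, FRAME_HEIGHT);

    requestAnimationFrame(animate);
}
```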
In this lesson, let's construct an interactive tool to present the
principles of sprite extraction and animation.
Example #1
In this example, we'll move the slider to extract the sprite indicated
by the slider value. See the red rectangle? This is the sprite image
currently selected! When you move the slider, the corresponding sprite
is drawn in the small canvas. As you move the slider from one to the
next, see how the animation is created?

[Try it at JSBin](https://jsbin.com/yukacep/edit?html,js,output):
Notice that we use an <input type="range"> to select the current sprite, and we have two canvases: a small one for displaying the currently-selected sprite, and a larger one that contains the sprite sheet and in which we draw a red square to highlight the selected sprite.
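For reference, the page behind this example is roughly structured as follows. This is a simplified sketch, not the exact JSBin markup: the ids match the ones queried in the JavaScript extract below, but element types, sizes and labels are indicative only:

```
<!-- Simplified, indicative markup: ids match the JavaScript extract below -->
<canvas id="canvas" width="48" height="92"></canvas>        <!-- currently-selected sprite -->
<canvas id="spritesheet" width="624" height="736"></canvas> <!-- whole sprite sheet + red square -->

<p>
  Sprite number: <span id="spriteNumber"></span>
  <input type="range" id="spriteSelect" value="0">
</p>
<p>
  x: <input id="x"> y: <input id="y">
  width: <input id="width"> height: <input id="height">
</p>
```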
Here’s an extract from the JavaScript. You don’t have to understand all the details, just look at the lines that extract the individual sprites (the drawImage calls that copy a sub-image of the sprite sheet):
```
1.  var SPRITE_WIDTH = 48; // Characteristics of the sprites and spritesheet
2.  var SPRITE_HEIGHT = 92;
3.  var NB_ROWS = 8;
4.  var NB_FRAMES_PER_POSTURE = 13;
...
9.  var canvas, canvasSpriteSheet, ctx1, ctx2;
10.
11. window.onload = function() {
12.   canvas = document.getElementById("canvas");
13.   ctx1 = canvas.getContext("2d");
14.   canvasSpriteSheet = document.getElementById("spritesheet");
15.   ctx2 = canvasSpriteSheet.getContext("2d");
16.
17.   xField = document.querySelector("#x");
18.   yField = document.querySelector("#y");
19.   wField = document.querySelector("#width");
20.   hField = document.querySelector("#height");
21.   spriteSelect = document.querySelector("#spriteSelect");
22.   spriteNumber = document.querySelector("#spriteNumber");
23.
24.   // Update values of the input fields in the page
25.   wField.value = SPRITE_WIDTH;
26.   hField.value = SPRITE_HEIGHT;
27.   xField.value = 0;
28.   yField.value = 0;
29.   // Set attributes for the slider depending on the number of sprites on the
30.   // sprite sheet
31.   spriteSelect.min = 0;
32.   spriteSelect.max = NB_ROWS*NB_FRAMES_PER_POSTURE - 1;
33.   // By default the slider is disabled until the sprite sheet is fully loaded
34.   spriteSelect.disabled = true;
35.   spriteNumber.innerHTML = 0;
36.
37.   // Load the spritesheet
38.   spritesheet = new Image();
39.   spritesheet.src = "https://i.imgur.com/3VesWqx.png";
40.
41.   // Called when the spritesheet has been loaded
42.   spritesheet.onload = function() {
...
49.
50.     // Draw the whole spritesheet
51.     ctx2.drawImage(spritesheet, 0, 0);
52.     // Draw the first sprite in the big canvas, corresponding to sprite 0
53.     // wireframe rectangle in the sprite sheet
54.     drawWireFrameRect(ctx2, 0, 0, SPRITE_WIDTH, SPRITE_HEIGHT, 'red', 3);
55.     // small canvas, draw sub image corresponding to sprite 0
56.     ctx1.drawImage(spritesheet, 0, 0, SPRITE_WIDTH, SPRITE_HEIGHT,
57.                    0, 0, SPRITE_WIDTH, SPRITE_HEIGHT);
58.   };
59.
60.   // input listener on the slider
61.   spriteSelect.oninput = function(evt) {
62.     // Current sprite number from 0 to NB_FRAMES_PER_POSTURE * NB_ROWS
63.     var index = spriteSelect.value;
64.
65.     // Computation of the x and y position that corresponds to the sprite
66.     // number index as selected by the slider
67.     var x = index * SPRITE_WIDTH % spritesheet.width;
68.     var y = Math.floor(index / NB_FRAMES_PER_POSTURE) * SPRITE_HEIGHT;
69.
70.     // Update fields
71.     xField.value = x;
72.     yField.value = y;
73.
74.     // Clear big canvas, draw wireframe rect at x, y, redraw spritesheet
75.     ctx2.clearRect(0, 0, canvasSpriteSheet.width, canvasSpriteSheet.height);
76.     ctx2.drawImage(spritesheet, 0, 0);
77.     drawWireFrameRect(ctx2, x, y, SPRITE_WIDTH, SPRITE_HEIGHT, 'red', 3);
78.
79.     // Draw the current sprite in the small canvas
80.     ctx1.clearRect(0, 0, SPRITE_WIDTH, SPRITE_HEIGHT);
...
93.   ctx.strokeRect(x, y, w, h);
94.   ctx.restore();
95. }
96.
```
Explanations:
- Lines 1-4: characteristics of the sprite sheet: sprite size, how many rows, how many sprites per row, etc.

- Lines 11-39: initializations that run just after the page has been loaded. We first get the canvas and contexts. Then we set the minimum and maximum values of the slider (an <input type=range>) at lines 31-32, and disable it at line 34 (we cannot slide it before the sprite sheet image has been loaded). We display the current sprite number 0 in the