Deformable DETR: a Hugging Face tutorial

Deformable DETR Overview

The Deformable DETR model was proposed in "Deformable DETR: Deformable Transformers for End-to-End Object Detection" (arXiv:2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang and Jifeng Dai. The abstract from the paper is the following: DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.

In short, Deformable DETR mitigates the slow convergence and limited feature spatial resolution of the original DETR by leveraging a new deformable attention module that only attends to a small set of key sampling points around a reference.

Object detection background

Object detection is the computer vision task of detecting instances (such as humans, buildings, or cars) in an image. An image can contain multiple objects, each with its own bounding box and a label (e.g. a car and a building). Object detection models receive an image as input and output the coordinates of the bounding boxes and the associated labels of the detected objects. DETR was an exciting step forward in the world of object detection: the model uses so-called object queries to detect objects in an image.
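As a quick start, here is a minimal inference sketch with the 🤗 Transformers library using the SenseTime/deformable-detr checkpoint discussed in this tutorial. The sample image URL is only an illustration, and the post-processing call assumes a reasonably recent Transformers release.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

# Any RGB image works; this COCO sample URL is only an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) triples in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.5
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```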
The DETR architecture

The DETR model was proposed in "End-to-End Object Detection with Transformers" by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection; the guiding idea is that object detection should not be more difficult than classification. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. Due to this parallel nature, DETR is very fast and efficient, and the approach handles cases with varying numbers of objects while avoiding the need for anchor matching. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster R-CNN baseline on the challenging COCO object detection dataset, and in many settings it outperforms Faster R-CNN without much specialized additional work, though it is still slower than comparable single-stage object detectors. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner.

Usage tips

One can use DeformableDetrImageProcessor (or AutoImageProcessor) to prepare images, and optional targets, for the model.
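Following up on that tip, the image processor can also turn COCO-format annotations into training targets. The snippet below is a sketch under that assumption: the image, the box (in [x, y, width, height] pixel format) and the category id are made-up illustration values.

```python
from PIL import Image
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")

# Hypothetical training image and a single COCO-style annotation for it.
image = Image.new("RGB", (640, 480))  # stand-in for Image.open("example.jpg")
annotations = {
    "image_id": 0,
    "annotations": [
        {"bbox": [100.0, 120.0, 200.0, 150.0], "category_id": 1, "area": 30000.0, "iscrowd": 0},
    ],
}

# The processor resizes and normalizes the image and converts the boxes into the
# normalized (center_x, center_y, width, height) format the model is trained on.
encoding = processor(images=image, annotations=annotations, return_tensors="pt")
print(list(encoding.keys()))           # e.g. pixel_values, pixel_mask, labels
print(encoding["labels"][0].keys())    # e.g. class_labels, boxes, area, ...
```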
Related models and checkpoints

The SenseTime/deformable-detr checkpoint on the Hub is released under the Apache 2.0 license. Detic provides a Deformable DEtection TRansformer (DETR) trained on LVIS (including 1203 classes); it was introduced in the paper "Detecting Twenty-thousand Classes using Image-level Supervision" (arXiv:2201.02605) by Zhou et al. and first released in the original repository, and the Hub model corresponds to the "Box-Supervised_DeformDETR_R50_4x" checkpoint. For the Table Transformer, the authors train two DETR models, one for table detection and one for table structure recognition; the abstract from that paper notes that, recently, significant progress has been made applying machine learning to the problem of table structure inference and extraction from unstructured documents. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training and converges 6.7× to 10× faster than DETR; its abstract observes that the recently developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. The DINOv2 model was proposed in "DINOv2: Learning Robust Visual Features without Supervision" by Maxime Oquab, Timothée Darcet, Théo Moutakanni and colleagues.

Fine-tuning Deformable DETR

When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you fine-tune a pretrained model with a deep learning framework of your choice, for example with the 🤗 Transformers Trainer. As a concrete example, deformable-detr-resnet-50_finetuned_cppe5 is a fine-tuned version of SenseTime/deformable-detr on the cppe-5 dataset; its model card currently lists "more information needed" for the model description, intended uses and limitations, training and evaluation data, and the training procedure and hyperparameters.
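To make the Trainer workflow concrete, here is a self-contained toy sketch. It "fine-tunes" the checkpoint on a single dummy image just to show how the pieces fit together; the label names, hyperparameters, and collate function are illustrative assumptions rather than a recipe (for a real run, use a dataset such as cppe-5 as mentioned above).

```python
import torch
from PIL import Image
from transformers import (AutoImageProcessor, DeformableDetrForObjectDetection,
                          Trainer, TrainingArguments)

checkpoint = "SenseTime/deformable-detr"
processor = AutoImageProcessor.from_pretrained(checkpoint)

# Hypothetical label set for the target dataset (cppe-5 style, 5 classes).
id2label = {0: "coverall", 1: "face_shield", 2: "gloves", 3: "goggles", 4: "mask"}
model = DeformableDetrForObjectDetection.from_pretrained(
    checkpoint,
    id2label=id2label,
    label2id={v: k for k, v in id2label.items()},
    ignore_mismatched_sizes=True,  # re-initialize the classification head for 5 classes
)

# A one-image "dataset" built with the image processor, purely for illustration.
image = Image.new("RGB", (480, 480))
annotation = {
    "image_id": 0,
    "annotations": [{"bbox": [50, 60, 120, 80], "category_id": 2, "area": 9600, "iscrowd": 0}],
}
enc = processor(images=image, annotations=annotation, return_tensors="pt")
train_dataset = [{"pixel_values": enc["pixel_values"][0], "labels": enc["labels"][0]}]

def collate_fn(batch):
    # Stack images; keep labels as a list of dicts, as the model expects.
    return {
        "pixel_values": torch.stack([item["pixel_values"] for item in batch]),
        "labels": [item["labels"] for item in batch],
    }

args = TrainingArguments(
    output_dir="deformable-detr-finetuned",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    learning_rate=1e-5,
    remove_unused_columns=False,  # our custom columns must reach collate_fn
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset, data_collator=collate_fn)
trainer.train()
```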
Utilizing the Jupyter Notebook

To showcase the usage of DETR, a Jupyter notebook is provided that guides users through the entire process of training, evaluating, and utilizing the DETR model. A community tutorial on training DETR on a custom dataset is also available, with the code at https://github.com/thedeepreader/detr_tutorial and the WIDER FACE dataset at http://shuoyang1213.me/WIDERFACE/.

Exporting to ONNX

Note that object-segmentation is not available as an export task; object-detection is the task to request instead. At the time of writing, however, exporting Deformable DETR with Optimum did not work: the command "optimum-cli export onnx --model SenseTime/deformable-detr --task object-detection detr_onnx/" fails with the error KeyError: "deformable-detr is not supported yet."

Where to go next

Chapters 1 to 4 of the Hugging Face course provide an introduction to the main concepts of the 🤗 Transformers library; by the end of that part of the course, you will be familiar with how Transformer models work and will know how to use a model from the Hugging Face Hub, fine-tune it on a dataset, and share your results on the Hub. In the documentation, TUTORIALS are a great place to start if you're a beginner and will help you gain the basic skills you need to start using the library, while HOW-TO GUIDES show you how to achieve a specific goal, like fine-tuning a pretrained model for language modeling or how to write and share a custom model.

Model outputs

The forward pass of DeformableDetrModel returns a transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False), comprising various elements depending on the configuration (DeformableDetrConfig) and inputs.
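To illustrate that output structure (and the analogous object-detection output returned by the detection head), the sketch below inspects the class-logit and box predictions and shows the tuple form you get with return_dict=False. The placeholder image is only there to make the example self-contained, and the exact set of output attributes should be treated as version-dependent.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

image = Image.new("RGB", (640, 480))  # placeholder input, for illustration only
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)          # an object-detection ModelOutput
    print(outputs.logits.shape)        # (batch, num_queries, num_labels): class logits
    print(outputs.pred_boxes.shape)    # (batch, num_queries, 4): normalized (cx, cy, w, h)

    # With return_dict=False the same elements come back as a plain tuple.
    tuple_outputs = model(**inputs, return_dict=False)
    print(type(tuple_outputs), len(tuple_outputs))
```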