IP : 3.147.60.148
Hostname : ns1.eurodns.top
Kernel : Linux ns1.eurodns.top 4.18.0-553.5.1.lve.1.el7h.x86_64 #1 SMP Fri Jun 14 14:24:52 UTC 2024 x86_64
Disabled Functions : mail,sendmail,exec,passthru,shell_exec,system,popen,curl_multi_exec,parse_ini_file,show_source,eval,open_base,symlink
OS : Linux
PATH : /home/sudancam/public_html/0d544/../ph/.././../www/soon/../un6xee/index/vgg-face-dataset-download.php
VGG face dataset download
7 April 2024 12:56

The VGG-Face (vgg16) pre-trained CNN was learned from a large face dataset containing 982,803 web images of 2,622 celebrities and public figures. Identities – 110,000. The whole dataset is split into a training set and a test set. We got reviewer feedback on our submitted paper.

Dec 21, 2020 · Neither the models nor the dataset links are online. ImageNet is impossible to download for an average human being from the official source.

Oct 16, 2019 · Face Recognition with VGG-Face in Keras.

May 1, 2018 · VGGFace2: A dataset for recognising faces across pose and age.

The VGG-Sound dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 license.

Oct 23, 2017 · In this paper, we introduce a new large-scale face dataset named VGGFace2.

It has a total of 851 images, which are a subset of PASCAL VOC, and a total of 1,341 annotations. All images were obtained from Flickr (Yahoo's dataset) and are licensed under Creative Commons. If you require text annotation (e.g. for audio-visual speech recognition), also consider using the LRS dataset. The dataset consists of two versions, LRW and LRS2.
Get full access to Deep Learning for Computer Vision and 60K+ other titles, with a free 10-day trial of O'Reilly.

Qiong Cao, Li Shen, Weidi Xie, Omkar M. From publication: Deep Face Verification for Spherical Images.

Sep 9, 2023 · CelebFaces Attributes Dataset (CelebA) is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. Then save the model for face recognition.

We built this dataset using an existing face dataset called VGGFace. Only the largest vgg10 is provided here. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession. The goal of this study is to create a facial-recognition-based automatic attendance-tracking system. For instance, Facebook [22] trained a face identification model using 500 million images of over 10 million subjects.

VGGFace2? Yes, it is a great source for datasets. vgg_face_dataset_download has no bugs, no vulnerabilities, and low support.

The MegaFace dataset is the largest publicly available facial recognition dataset, with a million faces and their respective bounding boxes. "Deep Face Recognition," BMVC. An overview of the VGGFace2 dataset.

In this paper we propose a deep learning solution to age estimation from a single face image without the use of facial landmarks, and introduce the IMDB-WIKI dataset, the largest public dataset of face images with age and gender labels.

In this paper, we introduce a new large-scale face dataset named VGGFace2. VGGFace2 consists of a training set and a validation set.

May 25, 2020 · Adapting VGG-16 to Our Dataset.
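The "Adapting VGG-16 to Our Dataset" snippets above boil down to one idea: freeze a pretrained feature extractor and train only a new classification head on top of it. A framework-free sketch of that idea follows; the random "backbone", the synthetic data, and all sizes are invented for illustration and merely stand in for VGG-16 features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained backbone" stand-in: a fixed random projection + ReLU.
# In the transfer-learning setting this role is played by VGG-16 minus its
# top layers; the projection here is purely illustrative.
W_frozen = rng.normal(size=(8, 16))

def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen, never updated

# Tiny synthetic two-class "dataset" (stands in for face images/labels).
X = np.concatenate([rng.normal(0.0, 1.0, (50, 8)),
                    rng.normal(2.0, 1.0, (50, 8))])
y = np.concatenate([np.zeros(50), np.ones(50)])

F = backbone(X)           # features from the frozen base
w = np.zeros(F.shape[1])  # the only trainable parameters:
b = 0.0                   # a new logistic-regression "head"

def loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    return float(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))

initial = loss(w, b)
for _ in range(500):      # plain gradient descent on the head only
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= 0.05 * (F.T @ g) / len(y)
    b -= 0.05 * float(g.mean())
final = loss(w, b)
print(initial > final)  # True: the new head fits while the base stays frozen
```

The same shape applies with a real framework: only the head's parameters receive gradient updates, the backbone's weights stay fixed.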
I have used the VGG-16 model as it is a smaller model, and real-time prediction can work on my local system without a GPU. Save the trained weights as an ".h5" checkpoint.

The IJB-B [22] dataset was released as an evaluation benchmark (test only) for face detection, recognition and clustering in images and videos.

The dataset contains 720K images with 10K identities (72 images per identity). VGG has no overlap with some other popular benchmarks such as LFW.

Mar 20, 2017 · Once you have TensorFlow/Theano and Keras installed, make sure you download the source code and example images for this blog post using the "Downloads" section at the bottom of the tutorial.

Apr 18, 2023 · DeepFace is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework (API) for Python.

out = vgg_model.get_layer(layer_name).output; vgg_model_new = Model(vgg_model.input, out)  # after this point you can use your new model. To download the VGGFace2 dataset, see the authors' site.

VGG & VGG2: These two face recognition datasets contain color face images of celebrities collected from the web.

Oct 26, 2020 · VGG models are a type of CNN architecture proposed by Karen Simonyan and Andrew Zisserman of the Visual Geometry Group (VGG), Oxford University, which brought remarkable results in the ImageNet Challenge.

We can use these coordinates to extract the face. Preprocessing images: faces should be detected and cropped from images before they are fed to this face recognizer (demo.py).

Sep 25, 2018 · vgg_face2. Since the dataset links are no longer active on GitHub, I have removed the model links until the dataset becomes available again. (ArminBaz/UTK-Face)

The Oxford-IIIT Pet dataset and annotations are roughly 800 MB in size and available for download via BitTorrent with Academic Torrents; we recommend the use of the BitTorrent protocol. The weights are taken from this repository.
It was introduced in our paper DigiFace-1M: 1 Million Digital Face Images for Face Recognition and can be used to train deep learning models for facial recognition. This dataset is 20 times larger than analogous existing ones, and contains 5K videos spanning over 200 categories.

Sample of the VGGFace dataset, from publication: Exploring Transfer Learning on Face Recognition of Dark Skinned, Low Quality and Low Resource Face Data.

Sep 27, 2020 · The Visual Geometry Group (VGG) at Oxford has built three models, VGG-16, ResNet-50, and SeNet-50, trained for face recognition as well as for face classification. For face embedding and identification, we use the FaceNet model, which has demonstrated outstanding performance in recent research.

A recap of the Face Recognition problem. Is VGGFace2 still a widely used face dataset in 2021? Research Publication.

The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. For each image, face detection and estimated 5 keypoints are provided. In this notebook we will be implementing one of the VGG model variants.

Apart from these public datasets, Facebook and Google have large in-house datasets. Use CUFS or CUFSF as the reference style. This page contains the download links for the source code for computing the VGG-Face CNN descriptor, described in [1].

Mar 10, 2021 · The VGG-Face was originally trained for face identification tasks with the VGGFace dataset (including 2,622 identities in total, with 2,271 downloadable identities).

face = pixels[y1:y2, x1:x2]

We can then use the PIL library to resize this small image of the face to the required size; specifically, the model expects square input faces with the shape 224×224.
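The crop-and-resize step above can be sketched with NumPy slicing and Pillow. The image array and the bounding-box coordinates below are made up for illustration; in a real pipeline they would come from a decoded photo and a face detector:

```python
import numpy as np
from PIL import Image

# Stand-in for a decoded photo: an H x W x 3 uint8 array.
pixels = np.zeros((300, 400, 3), dtype=np.uint8)

# Hypothetical bounding box from a face detector (x1, y1, x2, y2).
x1, y1, x2, y2 = 120, 80, 220, 200

# Extract the face region exactly as in the snippet above.
face = pixels[y1:y2, x1:x2]

# Resize to the 224x224 square input that VGG-Face-style models expect.
face_224 = np.asarray(Image.fromarray(face).resize((224, 224)))
print(face_224.shape)  # (224, 224, 3)
```

Note that NumPy indexes rows first, so the y range comes before the x range in the slice, while Pillow's `resize` takes a (width, height) tuple.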
Although the period was very fruitful, with contributions across the Face Recognition area, VGGFace presented novelties that earned it a large number of citations and worldwide recognition. We suggested a Convolutional Neural Network (CNN) method for this system's real-time recognition and identification of many faces. The images in this dataset cover large pose variations and background clutter.

Citation: please cite the following if you make use of the dataset. Finally, we performed two experiments on two unconstrained datasets and reported our results using rank-based metrics. From publication: Improving Accuracy of Face Recognition in the Era of Mask-Wearing: An Evaluation of a Pareto-Optimized FaceNet.

Jul 21, 2021 · Flickr Faces: This high-quality image dataset features 70,000 high-quality PNG images at 1024×1024 resolution with considerable variation/diversity in terms of age, race, background, ethnicity, and more.

Mar 25, 2021 · By SuNT, 25 March 2021. Keywords: python facial recognition, facial verification, deep learning facial recognition, facial embeddings, facial comparison, VGGFace.

To associate your repository with the vggface2-dataset topic, visit your repo's landing page and select "manage topics."

Unlike the above datasets, which are geared towards image-based face recognition, the YouTube Faces (YTF) [23] and UMDFaces-Videos [4] datasets aim to recognise faces in unconstrained videos.

Input the cropped face(s) into the embeddings generator (e.g. OpenFace) and get the output embedding vector. For each identity, 4 different sets of accessories are sampled and 18 images are rendered for each set. We can add one more layer or retrain the last layer to extract the main features of our image.
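Once each cropped face is mapped to an embedding vector, verification reduces to comparing vectors. A common choice, assumed here since the source does not name a metric, is cosine similarity against a tuned threshold:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb1, emb2, threshold=0.5):
    # The threshold is illustrative; real systems tune it on a validation set.
    return cosine_similarity(emb1, emb2) >= threshold

# Synthetic 128-d embeddings standing in for the generator's output.
rng = np.random.default_rng(1)
e1 = rng.normal(size=128)
e2 = e1 + rng.normal(scale=0.1, size=128)  # near-duplicate (same person)
e3 = rng.normal(size=128)                  # unrelated embedding

print(is_same_person(e1, e2))  # True
print(is_same_person(e1, e3))  # False
```

Euclidean distance on L2-normalized embeddings is an equivalent alternative; either way the threshold, not the metric, does most of the work.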
The dataset contains 2.6 million images of 2,622 people. The VGG-Face CNN descriptors are computed using a CNN implementation based on the VGG-Very-Deep-16 architecture described in [1] and are evaluated on the Labeled Faces in the Wild [2] and YouTube Faces [3] datasets. The earlier version of the dataset had not been completely curated by human annotators and contains some label noise. Face recognition is the problem of identifying a person from their face in an image or video; for comparison, Google's face recognition model was trained on 200 million images of 8 million identities, and CelebA offers large diversity and rich annotations across 10,177 identities. For each image, a face detection and an estimated five keypoints are provided. The original paper is "Deep Face Recognition" by Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman, Visual Geometry Group, Department of Engineering Science, University of Oxford (BMVC 2015). Because the images are subject to copyright, the dataset is distributed as an archive of per-identity URL lists rather than the images themselves: after downloading, extract it into the repo folder with tar -zxvf vgg_face_dataset.tar.gz (the nddbk/vgg-faces-utils repository provides a script to download and annotate the images). Note that the official download link has been unreliable since mid-2019. Since the UMDFaces dataset does not specify training and validation sets, a reasonable default is to select two images from every subject for validation.
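Since VGG-Face ships as text files of image URLs with face bounding boxes, downloading starts with parsing those lists. The exact column layout assumed below (image id, URL, then left/top/right/bottom) is an assumption; verify it against the README inside the archive:

```python
def parse_vgg_face_line(line):
    """Parse one line of a VGG-Face URL list.

    Assumed whitespace-separated columns: image id, URL, then the face
    bounding box as left, top, right, bottom. Check the archive's README
    before relying on this layout.
    """
    fields = line.split()
    image_id, url = fields[0], fields[1]
    left, top, right, bottom = (float(v) for v in fields[2:6])
    return {"id": image_id, "url": url, "box": (left, top, right, bottom)}
```

A downloader script would loop over each identity's file, fetch every URL, and crop with the stored box.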
Static face images for all the identities in VoxCeleb1 can be found in the VGGFace dataset, and those for VoxCeleb2 in the VGGFace2 dataset. PyTorch provides pretrained VGG-16 weights for the ImageNet classes, but no weights trained on VGG-Face; those must be converted from the original vgg_face_matconvnet release or obtained from third-party ports such as the facenet-pytorch package, whose documentation explains both the preprocessing steps and the format of the pretrained weights. If the official server is down, the dataset is often easier to obtain from a torrent. Helper scripts exist for preparing your own data: for example, a Dataset_Image_Crop.py script can automatically crop captured images, resize them to 224×224, and rename the results; by default it loads images from a Webcam Captures folder, but this can be changed by setting the directory variable at the beginning of the script. The images then need to be arranged into 'train' and 'val' folders. The UMDFaces dataset (the three batches of still images) contains 367,888 face annotations for 8,277 subjects.
The original paper lays out the layer structure of the network in detail. Its successor, VGGFace2 (Cao, Qiong, et al., "VGGFace2: A dataset for recognising faces across pose and age," 2018), contains 3.31 million images of 9,131 subjects, with an average of 362.6 images per subject; there is no identity overlap between the two versions, and each version has its own train/test split. Humans achieve roughly 97.53% accuracy on face verification benchmarks such as LFW, a level these models have reached and surpassed. Open-source tooling makes the models easy to apply: deepface, for instance, is a hybrid face recognition framework wrapping state-of-the-art models including VGG-Face, FaceNet, OpenFace, DeepFace, DeepID, ArcFace, Dlib, SFace, and GhostFaceNet. If privacy is a concern, Digi-Face 1M — introduced in the paper "DigiFace-1M: 1 Million Digital Face Images for Face Recognition" — is the largest synthetic face recognition dataset free from privacy violations and lack of consent; for each identity, four different sets of accessories are sampled and 18 images are rendered per set.
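Frameworks such as deepface ultimately reduce verification to comparing two embedding vectors against a distance threshold. A minimal sketch of that comparison, using cosine distance (the 0.4 threshold is illustrative, not a value from the paper):

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb1, emb2, threshold=0.4):
    """Decide whether two embeddings belong to the same person."""
    return cosine_distance(emb1, emb2) < threshold
```

In practice the embeddings would come from one of the pretrained networks above, and the threshold is tuned per model on a validation set.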
Thus, there is large variation in pose, lighting, expression, scene, and camera and imaging conditions. Community scripts for downloading the dataset exist on GitHub (for example, the vggFace_downloader gist and the lijian8/vgg_face_dataset_download repository), and pretrained weights have been ported to other stacks, including a Jax/Flax implementation of VGG. For small-scale experiments you can also assemble your own dataset — for instance, scraped images of a handful of well-known public figures — and fine-tune on it. The UMDFaces still-image batches are likewise available for direct download.
Other free resources exist as well: Face Images with Marked Landmark Points contains 7,049 images annotated with up to 15 facial keypoints, and the FIW (Families In the Wild) dataset is a massive collection where face photos are organized by person and then grouped by family. VGG is not a single model but a family of similar convolutional neural network architectures designed for the ImageNet challenge, where it won the 2014 localisation task and placed second in classification; VGG-Face is deeper than Facebook's DeepFace, with 22 layers and 37 deep units. VGGFace2 provides loosely cropped faces for each identity, plus meta information per identity and per face image; the dataset was collected with three goals in mind: many images per identity, wide variation in pose, age, and ethnicity, and low label noise. The download links for the VGGFace2 dataset are no longer available from the original website; if you wish to request access, follow the instructions on the challenge page.
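The VGG family shares one convolutional recipe: stacks of 3×3 convolutions separated by 2×2 max-pools. The VGG-16 configuration below is standard; the small helper computes its convolutional parameter count, which is a quick sanity check when porting weights between frameworks:

```python
# VGG-16 convolutional configuration: integers are output channels, 'M' is a 2x2 max-pool.
VGG16_CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
             512, 512, 512, 'M', 512, 512, 512, 'M']

def conv_params(cfg, in_ch=3, k=3):
    """Count weights + biases of the 3x3 conv layers described by cfg."""
    total = 0
    for v in cfg:
        if v == 'M':            # max-pool layers have no parameters
            continue
        total += (k * k * in_ch + 1) * v
        in_ch = v
    return total
```

The fully connected head (and hence the output layer size) is what varies between the ImageNet and face-identification versions of the network.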
Two core sub-tasks of face recognition are face verification — a one-to-one mapping that checks whether a given face matches a claimed identity — and face identification, a one-to-many search over enrolled identities. In a typical layout, images are sorted into per-person train and test folders (e.g., an Images/ directory for raw captures and Images_crop/ for the cropped faces). Subjects in these datasets are mostly celebrities — actors, athletes, politicians — photographed in completely uncontrolled situations with non-cooperative subjects; the PubFig database, for instance, is a large real-world face dataset of 58,797 images of 200 people collected from the internet. Note that the Digi-Face 1M dataset is available for non-commercial research purposes only. In the crop_face step, a detector such as MTCNN finds the face and NumPy image slicing crops it out. Once trained, a classifier can be run from the command line, for example: python classify_image.py --image images/soccer_ball.jpg --model vgg16. When fine-tuning, remember to update the network's final layers to reflect the smaller number of classes compared with ImageNet's 1,000; training time for this step can vary.
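The classify_image.py invocation above implies a small command-line interface. A minimal argparse skeleton with those two flags (the flag names come from the example; the default model choice is an assumption):

```python
import argparse

def build_parser():
    """CLI matching the classify_image.py invocation shown above."""
    p = argparse.ArgumentParser(
        description="Classify an image with a pretrained CNN")
    p.add_argument("--image", required=True,
                   help="path to the input image")
    p.add_argument("--model", default="vgg16",
                   help="network architecture to use (assumed default: vgg16)")
    return p

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.image, args.model)
```

The actual script would then load the named Keras model and run inference on the image.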
After training, save the model (e.g., to vgg_face.h5), then load it back to recognize faces. VGG-16, VGG-Face, ResNet-50, and MobileNet v2 were some of the prominent deep network architectures analyzed in [10]. VGGFace2 contains about 3.3 million face images, averaging roughly 362 samples per subject with a minimum of 87 images per subject. For VGG-16 in Keras, call keras.applications.vgg16.preprocess_input on your inputs before passing them to the model: it converts the input images from RGB to BGR, then zero-centers each color channel with respect to the ImageNet dataset, without scaling. Only the output layer differs from the ImageNet version of the architecture.
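What preprocess_input does in "caffe" mode can be reproduced in a few lines of NumPy; the per-channel means below are the standard ImageNet BGR means used by Keras:

```python
import numpy as np

# Per-channel ImageNet means in BGR order (values used by Keras' "caffe"-style mode).
BGR_MEANS = np.array([103.939, 116.779, 123.68])

def preprocess_input(rgb_batch):
    """Mimic keras.applications.vgg16.preprocess_input: RGB -> BGR, subtract means."""
    x = np.asarray(rgb_batch, dtype=np.float64)
    x = x[..., ::-1]          # flip the channel axis: RGB -> BGR
    return x - BGR_MEANS      # zero-center each channel; no scaling
```

Forgetting this step (or applying RGB-order means to BGR data) is a common source of silently degraded accuracy when porting VGG weights.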
The VMER (VGG-Face2 Mivia Ethnicity Recognition) dataset is composed of images collected from the original VGGFace2, which is among the largest face datasets in the world with more than 3.3 million images. A typical recognition program works as follows: detect the face(s) in the input image, crop out the face(s), feed each crop to the embedding generator, and take the output embedding vector. With the keras_vggface package, intermediate-layer features can be extracted like this:

    from keras.engine import Model
    from keras.layers import Input
    from keras_vggface.vggface import VGGFace

    # Layer features
    layer_name = 'layer_name'  # edit this line
    vgg_model = VGGFace()  # pooling: None, 'avg' or 'max'
    out = vgg_model.get_layer(layer_name).output
    custom_model = Model(vgg_model.input, out)
    # After this point you can use your custom model

Be prepared for flaky downloads: the official server has intermittently returned "The requested URL /vgg_face2/get_file was not found on this server", sometimes after tens of gigabytes of the roughly 36 GB archive have already been transferred.
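The last step of the pipeline above — turning an embedding into an identity — can use a simple distance metric over an enrolled gallery. A minimal Euclidean nearest-neighbour sketch (the gallery names and the rejection threshold are illustrative):

```python
import numpy as np

def identify(probe, gallery, threshold=0.8):
    """Return (name, distance) of the enrolled identity nearest to the probe
    embedding, or ("unknown", distance) if the best match exceeds threshold."""
    best_name, best_dist = "unknown", float("inf")
    probe = np.asarray(probe, dtype=float)
    for name, emb in gallery.items():
        dist = float(np.linalg.norm(probe - np.asarray(emb, dtype=float)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist > threshold:
        return "unknown", best_dist   # reject: no enrolled identity is close enough
    return best_name, best_dist
```

With more than a handful of enrolled identities, the linear scan would be replaced by a k-d tree or an approximate nearest-neighbour index, but the logic is the same.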
Fine-tuning pretrained models with new data is one option, but in most situations the best way to implement face recognition is to use the pretrained models directly, with either a clustering algorithm or a simple distance metric to determine the identity of a face. (As an aside, one sketch-photo recognition repository trains its models under three configurations — --vgg_select_num 0 --train_style cufs, --vgg_select_num 0 --train_style cufsf, and --vgg_select_num 10 with extra VGG-Face data in training — evaluating on CUFS or CUFSF accordingly.) A complete version of the dataset license can be found on the official page. </b></p> </div> </div> </body> </html>