<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: candid clip</title> <meta name="description" content="Search results for: candid clip"> <meta name="keywords" content="candid clip"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science 
Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="candid clip" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 
mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="candid clip"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 34</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: candid clip</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">34</span> Feedback of Using Set-Up Candid Clips as New Media</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Miss%20Suparada%20Prapawong">Miss Suparada Prapawong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objectives were to analyze the use of new media in the form of set-up candid clips and its effect on the product and presenter, to study the effectiveness of set-up candid clips in increasing circulation and audience satisfaction, and to apply the resulting knowledge to develop communication for publicity and advertising via new media. This qualitative research was based on questionnaires and in-depth interviews with experts. 
The findings showed the advantages and disadvantages of publicity and advertising via new media in the form of set-up candid clips, including the specific target group for this kind of advertising. The results will be useful for the fields of publicity and advertising in today's new media. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=candid%20clip" title="candid clip">candid clip</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20media" title=" new media"> new media</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20network" title=" social network "> social network </a> </p> <a href="https://publications.waset.org/abstracts/10109/feedback-of-using-set-up-candid-clips-as-new-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10109.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">308</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">33</span> Set Up Candid Clips Effectiveness</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Suparada">P. Suparada</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Eakapotch"> D. 
Eakapotch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objectives were to analyze the use of new media in the form of set-up candid clips and its effect on the product and presenter, to study the effectiveness of set-up candid clips in increasing circulation and audience satisfaction, and to apply the resulting knowledge to develop communication for publicity and advertising via new media. This qualitative research was based on questionnaires and in-depth interviews with experts. The findings showed the advantages and disadvantages of publicity and advertising via new media in the form of set-up candid clips, including the specific target group for this kind of advertising. The results will be useful for the fields of publicity and advertising in today's new media. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=candid%20clip" title="candid clip">candid clip</a>, <a href="https://publications.waset.org/abstracts/search?q=communication" title=" communication"> communication</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20media" title=" new media"> new media</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20network" title=" social network"> social network</a> </p> <a href="https://publications.waset.org/abstracts/8209/set-up-candid-clips-effectiveness" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8209.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">245</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">32</span> Using Set Up Candid Clips as Viral Marketing via New Media</h5> <div 
class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Suparada">P. Suparada</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Eakapotch"> D. Eakapotch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research’s objectives were to analyze the use of new media in the form of set-up candid clips and its effect on the product and presenter, to study the effectiveness of set-up candid clips in increasing circulation and audience satisfaction, and to apply the resulting knowledge to develop communication for publicity and advertising via new media. This qualitative research was based on questionnaires from 50 randomly selected representative samples and in-depth interviews with experts in the publicity and advertising fields. The findings indicated the positive and negative effects on the brand and presenter images of the products “Scotch 100” and “Snickers”, which used set-up candid clips via new media for publicity and advertising in Thailand. The results will be useful for the fields of publicity and advertising in new media. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=candid%20clip" title="candid clip">candid clip</a>, <a href="https://publications.waset.org/abstracts/search?q=effect" title=" effect"> effect</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20media" title=" new media"> new media</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20network" title=" social network "> social network </a> </p> <a href="https://publications.waset.org/abstracts/11114/using-set-up-candid-clips-as-viral-marketing-via-new-media" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11114.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">223</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">31</span> Useful Lifetime Prediction of Rail Pads for High Speed Trains</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chang%20Su%20Woo">Chang Su Woo</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyun%20Sung%20Park"> Hyun Sung Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Useful lifetime evaluation of rail pads is very important in the design procedure to assure safety and reliability. It is, therefore, necessary to establish a suitable criterion for the replacement period of rail pads. 
In this study, we performed property and accelerated heat-aging tests of rail pads, considering degradation factors and all environmental conditions including operation, and then derived a lifetime prediction equation from the changes in hardness, thickness, and static spring constant on an Arrhenius plot to establish how to estimate the aging of rail pads. With the useful lifetime prediction equation, when the change in hardness reached 10% at 25°C, the lifetime of e-clip pads was 2.5 years and that of f-clip pads was 1.7 years. When the change in thickness reached 10%, the lifetime of both e-clip and f-clip pads was 2.6 years. The results obtained in this study for estimating the useful lifetime of rail pads for high speed trains can be used to determine the maintenance and replacement schedule for rail pads. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rail%20pads" title="rail pads">rail pads</a>, <a href="https://publications.waset.org/abstracts/search?q=accelerated%20test" title=" accelerated test"> accelerated test</a>, <a href="https://publications.waset.org/abstracts/search?q=Arrhenius%20plot" title=" Arrhenius plot"> Arrhenius plot</a>, <a href="https://publications.waset.org/abstracts/search?q=useful%20lifetime%20prediction" title=" useful lifetime prediction"> useful lifetime prediction</a>, <a href="https://publications.waset.org/abstracts/search?q=mechanical%20engineering%20design" title=" mechanical engineering design"> mechanical engineering design</a> </p> <a href="https://publications.waset.org/abstracts/3182/useful-lifetime-prediction-of-rail-pads-for-high-speed-trains" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">326</span> </span> </div> </div> <div 
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">30</span> Antihypertensive Activity of Alcoholic Extract of Citrus Paradise Juice in One Clip One Kidney Hypertension Model in Rats</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lokesh%20Bhatt">Lokesh Bhatt</a>, <a href="https://publications.waset.org/abstracts/search?q=Jayesh%20Rathod"> Jayesh Rathod</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hypertension is one of the most prevalent cardiovascular disorders and is responsible for several other cardiovascular conditions. Although many drugs are available for the treatment of hypertension, a large population still has uncontrolled blood pressure. Thus, there is an unmet need for new therapeutic approaches. Fruit juice of Citrus paradise contains several flavonoids with vasodilatory activity. We hypothesized that alcoholic extract of Citrus paradise, which contains flavonoids, might attenuate hypertension. The objective of the present study was to evaluate the antihypertensive activity of alcoholic extract of Citrus paradise fruit juice in rats. Hypertension was induced using the one clip one kidney model in rats. The renal artery was occluded for 4 h after removal of one kidney. Once stabilized, ganglionic blockade was performed, followed by removal of the arterial clip from the kidney. Removal of the clip resulted in an increase in blood pressure due to the release of renin from the kidney. Alcoholic extract of Citrus paradise fruit juice was then administered at 50 mg/kg and 100 mg/kg doses by intravenous injection. Blood pressure was monitored continuously. Alcoholic extract of Citrus paradise fruit juice reduced hypertension in a dose-dependent manner. The antihypertensive activity was found to be associated with vasodilation. 
The results of the present study showed the antihypertensive potential of alcoholic extract of Citrus paradise fruit juice. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=citrus%20paradise" title="citrus paradise">citrus paradise</a>, <a href="https://publications.waset.org/abstracts/search?q=alcoholic%20extract" title=" alcoholic extract"> alcoholic extract</a>, <a href="https://publications.waset.org/abstracts/search?q=one%20clip%20one%20kidney%20model" title=" one clip one kidney model"> one clip one kidney model</a>, <a href="https://publications.waset.org/abstracts/search?q=vasodilation" title=" vasodilation"> vasodilation</a> </p> <a href="https://publications.waset.org/abstracts/67780/antihypertensive-activity-of-alcoholic-extract-of-citrus-paradise-juice-in-one-clip-one-kidney-hypertension-model-in-rats" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67780.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">289</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">29</span> Image Captioning with Vision-Language Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Promise%20Ekpo%20Osaine">Promise Ekpo Osaine</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Melesse"> Daniel Melesse</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community, as it connects vision and language understanding, especially in settings where a model is required to understand the content shown in an image and generate semantically and grammatically 
correct descriptions. In this project, we followed a standard approach to deep learning-based image captioning with an encoder-decoder setup, where the encoder extracts image features and the decoder generates a sequence of words that represents the image content. We investigated several image encoders: ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. For caption generation, we explored long short-term memory (LSTM) networks. The CLIP-LSTM model demonstrated superior performance compared to the CNN-based encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance, with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-modal%20AI%20systems" title="multi-modal AI systems">multi-modal AI systems</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20captioning" title=" image captioning"> image captioning</a>, <a href="https://publications.waset.org/abstracts/search?q=encoder" title=" encoder"> encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=decoder" title=" decoder"> decoder</a>, <a href="https://publications.waset.org/abstracts/search?q=BLUE%20score" title=" BLUE score"> BLUE score</a> </p> <a href="https://publications.waset.org/abstracts/181849/image-captioning-with-vision-language-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/181849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge 
badge-info">28</span> Axillary Evaluation with Targeted Axillary Dissection Using Ultrasound-Visible Clips after Neoadjuvant Chemotherapy for Patients with Node-Positive Breast Cancer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Naomi%20Sakamoto">Naomi Sakamoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Eisuke%20Fukuma"> Eisuke Fukuma</a>, <a href="https://publications.waset.org/abstracts/search?q=Mika%20Nashimoto"> Mika Nashimoto</a>, <a href="https://publications.waset.org/abstracts/search?q=Yoshitomo%20Koshida"> Yoshitomo Koshida</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Selective localization of the metastatic lymph node with a clip and removal of the clipped node together with the sentinel lymph node (SLN), known as targeted axillary dissection (TAD), reduces the false-negative rate (FNR) of SLN biopsy (SLNB) after neoadjuvant chemotherapy (NAC). For patients who achieve nodal pathologic complete response (pCR), accurate staging of the axilla by TAD allows omission of axillary lymph node dissection (ALND), decreasing postoperative arm morbidity without a negative effect on overall survival. This study aimed to investigate the ultrasound (US) identification rate and successful removal rate of two kinds of ultrasound-visible clips placed in metastatic lymph nodes during the TAD procedure. Methods: This prospective study was conducted on patients with clinically T1-3, N1, 2, M0 breast cancer undergoing NAC followed by surgery. A US-visible clip was placed in the suspicious lymph node under US guidance before neoadjuvant chemotherapy. Before surgery, US examination was performed to evaluate the detection rate of the clipped node. 
During the surgery, the clipped node was removed using several localization techniques, including hook-wire localization, dye injection, or a fluorescence technique, followed by a dual-technique SLNB and resection of palpable nodes if present. For the fluorescence technique, after injection of 0.1-0.2 mL of indocyanine green dye (ICG) into the clipped node, ICG fluorescence imaging was performed using the Photodynamic Eye infrared camera (Hamamatsu Photonics K.K., Shizuoka, Japan). For the dye injection method, 0.1-0.2 mL of pyoktanin blue dye was injected into the clipped node. Results: A total of 29 patients were enrolled. A Hydromark™ breast biopsy site marker (Hydromark, T3 shape; Devicor Medical Japan, Tokyo, Japan) was used in 15 patients, whereas an UltraCor™ Twirl™ breast marker (Twirl; C.R. Bard, Inc, NJ, USA) was placed in 14 patients. US identified the clipped node marked with the UltraCor™ Twirl™ in 100% (14/14) and with the Hydromark™ in 93.3% (14/15, p = ns). Successful removal of the clipped node marked with the UltraCor™ Twirl™ was achieved in 100% (14/14), whereas the node marked with the Hydromark™ was removed in 80% (12/15) (p = ns). Conclusions: The ultrasound identification rate differed between the two types of ultrasound-visible clips, which also affected the successful removal rate of the clipped nodes. Labelling the positive node with a highly US-visible clip allowed successful TAD. 
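The abstract above reports its two clip comparisons (detection 14/14 vs. 14/15; removal 14/14 vs. 12/15) as "p = ns" without naming the test. Assuming a two-sided Fisher's exact test, a common choice for small 2×2 tables (not necessarily the test the authors used), the reported counts can be checked with a minimal stdlib-only sketch:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d          # per-group totals
    col1, n = a + c, a + b + c + d     # total successes, grand total

    def p_table(x):
        # probability of the table with x successes in the first group
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Detection rate: Twirl 14/14 vs. Hydromark 14/15
p_detect = fisher_exact_two_sided(14, 0, 14, 1)
# Successful removal: Twirl 14/14 vs. Hydromark 12/15
p_remove = fisher_exact_two_sided(14, 0, 12, 3)
print(p_detect, p_remove)  # both well above 0.05, consistent with "p = ns"
```

With these counts, both p-values are far above 0.05 (the removal comparison gives roughly 0.22), which is consistent with the abstract's "p = ns"; with such small groups, a 100% vs. 80% difference is not statistically distinguishable.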
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer" title="breast cancer">breast cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=neoadjuvant%20chemotherapy" title=" neoadjuvant chemotherapy"> neoadjuvant chemotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=targeted%20axillary%20dissection" title=" targeted axillary dissection"> targeted axillary dissection</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20tissue%20marker" title=" breast tissue marker"> breast tissue marker</a>, <a href="https://publications.waset.org/abstracts/search?q=clip" title=" clip"> clip</a> </p> <a href="https://publications.waset.org/abstracts/177380/axillary-evaluation-with-targeted-axillary-dissection-using-ultrasound-visible-clips-after-neoadjuvant-chemotherapy-for-patients-with-node-positive-breast-cancer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177380.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">27</span> [Keynote Talk]: The Intoxicated Eyewitness: Effect of Alcohol Consumption on Identification Accuracy in Lineup</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vikas%20S.%20Minchekar">Vikas S. Minchekar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The eyewitness is a crucial source of evidence in the criminal justice system. However, relying on the recollection of an eyewitness, especially an intoxicated one, is not always judicious; it might lead to serious consequences. 
Day by day, alcohol-related crimes and criminal incidents in bars, nightclubs, and restaurants are increasing rapidly, and tackling such cases is very complicated for investigating officers. The people involved in these incidents are impaired by alcohol consumption; hence, their ability to identify suspects or recall the events is affected. Studies on the effects of alcohol consumption on motor activities such as driving and surgery have received much attention. However, the effect of alcohol intoxication on memory has received little attention from psychology, law, forensic, and criminology scholars across the world. In the Indian context, published articles on this issue have been virtually nonexistent to date. This field experiment aimed to find out the effect of alcohol consumption on identification accuracy in lineups. Forty adult social drinkers and twenty sober adults were randomly recruited for the study. The sober adults were assigned to a 'placebo' beverage group, while the social drinkers were divided into two groups, a 'low dose' of alcohol (0.2 g/kg) and a 'high dose' of alcohol (0.8 g/kg), so that their blood-alcohol concentration (BAC) levels would differ. After administering the beverage to the placebo group and liquor to the social drinkers over a period of 40 to 50 minutes, a five-minute video clip of a mock crime was shown to all participants in groups of four to five members. After exposure to the video clip, subjects were given 10 portraits and asked to recognize whether each person was involved in the mock crime or not. They were also asked to describe the incident. The subjects were given two opportunities to recognize the portraits and describe the events: the first immediately after the video clip and the second 24 hours later. The obtained data were analyzed by one-way ANOVA and Scheffé’s post hoc multiple comparison tests. 
The results indicated that the 'high dose' group differed remarkably from the 'placebo' and 'low dose' groups, while the 'placebo' and 'low dose' groups performed equally. The subjects in the 'high dose' group recognized only 20% of the faces correctly, while the subjects in the 'placebo' and 'low dose' groups recognized 90%. This study implies that intoxicated witnesses are less accurate in recognizing suspects and also less capable of describing the incidents where the crime took place. However, this study does not assert that intoxicated eyewitnesses are generally less trustworthy than their sober counterparts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intoxicated%20eyewitness" title="intoxicated eyewitness">intoxicated eyewitness</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20drinkers" title=" social drinkers"> social drinkers</a>, <a href="https://publications.waset.org/abstracts/search?q=lineups" title=" lineups"> lineups</a> </p> <a href="https://publications.waset.org/abstracts/61407/keynote-talk-the-intoxicated-eyewitness-effect-of-alcohol-consumption-on-identification-accuracy-in-lineup" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61407.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">26</span> Mental Wellbeing Using Music Intervention: A Case Study of Therapeutic Role of Music, From Both Psychological and Neurocognitive Perspectives</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Medha%20Basu">Medha Basu</a>, <a href="https://publications.waset.org/abstracts/search?q=Kumardeb%20Banerjee"> Kumardeb Banerjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Dipak%20Ghosh"> Dipak Ghosh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> After the massive blow of the COVID-19 pandemic, several health hazards have been reported all over the world. Serious cases of Major Depressive Disorder (MDD) are seen in about 15% of the global population, making depression one of the leading mental health diseases, as reported by the World Health Organization. Various psychological and pharmacological treatment techniques are regularly being reported. Music, a globally accepted mode of entertainment, is often used as a therapeutic measure to treat various health conditions. We have tried to understand how Indian Classical Music can affect the overall well-being of the human brain. A case study is reported here in which a flute rendition was chosen from a detailed audience response survey, and the effects of that clip on human brain conditions were studied from both psychological and neural perspectives. Drawing on internationally accepted depression-rating scales, two questionnaires were designed to understand both the prolonged and immediate effects of music on various emotional states of human lives. Thereafter, from EEG experiments on 5 participants using the same clip, the parameter ‘ALAY’, frontal alpha asymmetry (the difference in alpha power between the right and left frontal hemispheres), was calculated. The work of Richard Davidson shows that an increase in the ‘ALAY’ value indicates a decrease in depressive symptoms. Using the non-linear technique of MFDFA on the EEG data, we have also calculated frontal asymmetry using the complexity values of alpha waves in both hemispheres. 
The results show a positive correlation between the psychological survey and the EEG findings, revealing the prominent role of music in the human brain, leading to a decrease in mental unrest and an increase in overall well-being. In this study, we plan to propose the scientific foundation of music therapy, especially from a neurocognitive perspective, with appropriate neural biomarkers to understand the positive and remedial effects of music on the human brain. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=music%20therapy" title="music therapy">music therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG" title=" EEG"> EEG</a>, <a href="https://publications.waset.org/abstracts/search?q=psychological%20survey" title=" psychological survey"> psychological survey</a>, <a href="https://publications.waset.org/abstracts/search?q=frontal%20alpha%20asymmetry" title=" frontal alpha asymmetry"> frontal alpha asymmetry</a>, <a href="https://publications.waset.org/abstracts/search?q=wellbeing" title=" wellbeing"> wellbeing</a> </p> <a href="https://publications.waset.org/abstracts/186690/mental-wellbeing-using-music-intervention-a-case-study-of-therapeutic-role-of-music-from-both-psychological-and-neurocognitive-perspectives" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186690.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">41</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">25</span> A Method of Detecting the Difference in Two States of Brain Using Statistical Analysis of EEG Raw Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Digvijaysingh%20S.%20Bana">Digvijaysingh S. Bana</a>, <a href="https://publications.waset.org/abstracts/search?q=Kiran%20R.%20Trivedi"> Kiran R. Trivedi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces several methods of using the alpha wave to detect the difference between two states of the brain. One healthy subject participated in the experiment. EEG was measured on the forehead above the eye (FP1 position), with the reference and ground electrodes on an ear clip. The data samples were obtained in the form of EEG raw data, each reading lasting one minute. Readings were taken at different times throughout the day, and various statistical tests were carried out on the alpha-band EEG raw data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electroencephalogram%28EEG%29" title="electroencephalogram(EEG)">electroencephalogram(EEG)</a>, <a href="https://publications.waset.org/abstracts/search?q=biometrics" title=" biometrics"> biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=authentication" title=" authentication"> authentication</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG%20raw%20data" title=" EEG raw data"> EEG raw data</a> </p> <a href="https://publications.waset.org/abstracts/32552/a-method-of-detecting-the-difference-in-two-states-of-brain-using-statistical-analysis-of-eeg-raw-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32552.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">464</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" 
style="font-size:.9rem"><span class="badge badge-info">24</span> Control of Lymphatic Remodelling by miR-132</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Valeria%20Arcucci">Valeria Arcucci</a>, <a href="https://publications.waset.org/abstracts/search?q=Musarat%20Ishaq"> Musarat Ishaq</a>, <a href="https://publications.waset.org/abstracts/search?q=Steven%20A.%20Stacker"> Steven A. Stacker</a>, <a href="https://publications.waset.org/abstracts/search?q=Greg%20J.%20Goodall"> Greg J. Goodall</a>, <a href="https://publications.waset.org/abstracts/search?q=Marc%20G.%20Achen"> Marc G. Achen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Metastasis is the lethal aspect of cancer for most patients. Remodelling of lymphatic vessels associated with a tumour is a key initial step in metastasis because it facilitates the entry of cancer cells into the lymphatic vasculature and their spread to lymph nodes and distant organs. Although it is clear that vascular endothelial growth factors (VEGFs), such as VEGF-C and VEGF-D, are key drivers of lymphatic remodelling, the means by which many signaling pathways in endothelial cells are coordinately regulated to drive growth and remodelling of lymphatics in cancer is not understood. We seek to understand the broader molecular mechanisms that control cancer metastasis, and are focusing on microRNAs, which coordinately regulate signaling pathways involved in complex biological responses in health and disease. Here, using small RNA sequencing, we found that a specific microRNA, miR-132, is upregulated in expression in lymphatic endothelial cells (LECs) in response to the lymphangiogenic growth factors. Interestingly, ectopic expression of miR-132 in LECs in vitro stimulated proliferation and tube formation of these cells. 
Moreover, miR-132 is expressed in lymphatic vessels of a subset of human breast tumours that were previously found, by immunohistochemical analysis of tumour tissue microarrays, to express high levels of VEGF-D. In order to dissect the complexity of regulation by miR-132 in lymphatic biology, we performed Argonaute HITS-CLIP, which led us to identify the miR-132-mRNA interactome in LECs. We found that in LECs this microRNA controls many different pathways, mainly those governing cell proliferation and the regulation of the extracellular matrix and cell-cell junctions. We are now exploring the functional significance of miR-132 targets in the biology of LECs using biochemical techniques, functional in vitro cell assays and in vivo lymphangiogenesis assays. This project will ultimately define the molecular regulation of lymphatic remodelling by miR-132, and thereby identify potential therapeutic targets for drugs designed to restrict the growth and remodelling of tumour lymphatics that results in metastatic spread. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=argonaute%20HITS-CLIP" title="argonaute HITS-CLIP">argonaute HITS-CLIP</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer" title=" cancer"> cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=lymphatic%20remodelling" title=" lymphatic remodelling"> lymphatic remodelling</a>, <a href="https://publications.waset.org/abstracts/search?q=miR-132" title=" miR-132"> miR-132</a>, <a href="https://publications.waset.org/abstracts/search?q=VEGF" title=" VEGF"> VEGF</a> </p> <a href="https://publications.waset.org/abstracts/110680/control-of-lymphatic-remodelling-by-mir-132" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110680.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">128</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">23</span> Patient-Friendly Hand Gesture Recognition Using AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Prabhu">K. Prabhu</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Dinesh"> K. Dinesh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Ranjani"> M. Ranjani</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Suhitha"> M. Suhitha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the tough times of COVID-19, hospitalized patients found it difficult to always convey what they wanted or needed to an attendant. Sometimes the attendants might not be there at all. 
In that case, patients can use simple hand gestures to control electrical appliances (for example, switching a zero-watt bulb) and three other gestures for voice-note intimation. In this AI-based hand-gesture recognition project, a NodeMCU handles the control action of the relay; it is connected to Firebase for storing the values in the cloud and is interfaced with the Python code via a Raspberry Pi. For three of the hand gestures, a voice clip is played to notify the attendant, using Google’s text-to-speech and the built-in audio-file option of the Raspberry Pi 4. All five gestures are detected when shown by hand to the webcam placed for gesture detection. The personal computer is used for displaying the gestures and for running the code in the Raspberry Pi Imager. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nodeMCU" title="nodeMCU">nodeMCU</a>, <a href="https://publications.waset.org/abstracts/search?q=AI%20technology" title=" AI technology"> AI technology</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture" title=" gesture"> gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=patient" title=" patient"> patient</a> </p> <a href="https://publications.waset.org/abstracts/144943/patient-friendly-hand-gesture-recognition-using-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">22</span> On the Implementation of The Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hala%20Zaghloul">Hala Zaghloul</a>, <a href="https://publications.waset.org/abstracts/search?q=Taymoor%20Nazmy"> Taymoor Nazmy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting the cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model, derived from a mammalian neural model, with great potential in image processing as well as in target recognition, feature extraction, speech recognition, combinatorial optimization, and compressed encoding. PCNN has a unique feature among other types of neural networks, which makes it a candidate for an important approach to perception in cognitive systems. This work demonstrates and emphasizes the potential of PCNN to perform different tasks related to image processing. The main drawback, or the obstacle that prevents the direct implementation of such a technique, is the need to find a way to control the PCNN parameters so that they perform a specific task. This paper evaluates the performance of the standard PCNN model for processing images with different properties, selects the important parameters that give significant results, and discusses approaches to adapting the PCNN parameters to perform a specific task. 
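The pulse-coupled dynamics described above can be sketched in NumPy. This is a simplified illustration (the feeding input is taken to be the stimulus itself, and all parameter values and the 3x3 linking neighbourhood are assumptions, not the paper's tuned settings): each neuron's internal activity is modulated by its neighbours' pulses, and a decaying dynamic threshold makes brighter pixels fire earlier.

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of each pixel's 8-neighbourhood (zero-padded borders)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2] + P[1:-1, 2:] +
            P[2:, :-2] + P[2:, 1:-1] + P[2:, 2:])

def pcnn(S, n_iter=10, beta=0.2, a_link=1.0, a_theta=0.5, V_link=0.2, V_theta=20.0):
    """Simplified PCNN over a stimulus image S with values in [0, 1].
    Returns how many times each neuron pulsed over n_iter iterations."""
    L = np.zeros_like(S)              # linking input
    theta = np.full_like(S, V_theta)  # dynamic threshold
    Y = np.zeros_like(S)              # pulse output
    fires = np.zeros_like(S)
    for _ in range(n_iter):
        L = np.exp(-a_link) * L + V_link * neighbor_sum(Y)
        U = S * (1.0 + beta * L)            # internal activity
        Y = (U > theta).astype(S.dtype)     # pulse when activity beats threshold
        theta = np.exp(-a_theta) * theta + V_theta * Y
        fires += Y
    return fires
```

The firing-count map is what makes the model useful for segmentation: pixels of similar intensity pulse in the same iterations, and the choice of beta, the decay rates, and the threshold magnitude is exactly the parameter-control problem the abstract identifies.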
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20system" title="cognitive system">cognitive system</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=PCNN%20kernels" title=" PCNN kernels"> PCNN kernels</a> </p> <a href="https://publications.waset.org/abstracts/53579/on-the-implementation-of-the-pulse-coupled-neural-network-pcnn-in-the-vision-of-cognitive-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53579.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">21</span> Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keyi%20Wang">Keyi Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Similar to the touchscreen, hand-gesture-based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method for an image- and video-clip-based hand gesture recognition system using CNNs (Convolutional Neural Networks). A dataset containing 6 hand gesture images is used to train a 2D CNN model, achieving ~98% accuracy. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips, resulting in ~83% accuracy. 
It is demonstrated that a Cozmo robot loaded with pre-trained models is able to recognize static and dynamic hand gestures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title=" hand gesture recognition"> hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/132854/static-and-dynamic-hand-gesture-recognition-using-convolutional-neural-network-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132854.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">20</span> Design and Development of Automatic Onion Harvester</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Revathi">P. Revathi</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Mrunalini"> T. Mrunalini</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Padma%20Priya"> K. Padma Priya</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Ramya"> P. Ramya</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Saranya"> R. 
Saranya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the tough times of COVID-19, hospitalized patients found it difficult to always convey what they wanted or needed to an attendant. Sometimes the attendants might not be there at all. In that case, patients can use simple hand gestures to control electrical appliances (for example, switching a zero-watt bulb) and three other gestures for voice-note intimation. In this AI-based hand-gesture recognition project, a NodeMCU handles the control action of the relay; it is connected to Firebase for storing the values in the cloud and is interfaced with the Python code via a Raspberry Pi. For three of the hand gestures, a voice clip is played to notify the attendant, using Google’s text-to-speech and the built-in audio-file option of the Raspberry Pi 4. All five gestures are detected when shown by hand to a webcam placed for gesture detection. A personal computer is used for displaying the gestures and for running the code in the Raspberry Pi Imager. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=onion%20harvesting" title="onion harvesting">onion harvesting</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20pluging" title=" automatic pluging"> automatic pluging</a>, <a href="https://publications.waset.org/abstracts/search?q=camera" title=" camera"> camera</a>, <a href="https://publications.waset.org/abstracts/search?q=raspberry%20pi" title=" raspberry pi"> raspberry pi</a> </p> <a href="https://publications.waset.org/abstracts/144945/design-and-development-of-automatic-onion-harvester" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144945.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">19</span> Candid Panchali&#039;s Unheard Womanhood: A Study of Chitra Divakurani&#039;s the Palace of Illusions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shalini%20Attri">Shalini Attri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Silence has been 'scriptured' in women within dominating social structures, as the modes of speaking and behaving which deny women free investiture to language. A woman becomes the product of ideological constructions as language substantiates andro-centric bias. Constrained from writing/speaking in the public sphere, women have traditionally been confined to expressing themselves in writing private poetry, letters or diaries. The helplessness of a woman is revealed in the ways in which she is expected to speak a language, which, in fact, is man-made. 
There are visible binaries of coloniser-colonised, Western-Eastern, White-Black, Nature-Culture, even Male-Female, that contribute significantly to our understanding of the concept of representation and its resultant politics. Normally, an author is labeled as feminist, humanist, or propagandist, and this process of labeling corresponds to a sense of politics as well as the author's inclination toward a particular field. One cannot even think of contemporary literature without this representational politics; thus, every analysis of a work of literature demands a political angle. Besides literature, historical facts and manuscripts are also subject to this politics. The image of woman as someone either dependent on man or exploited by him provides only half the picture of this representational politics. The present paper is an attempt to study Panchali’s (Draupadi of the Mahabharata) voiceless articulation and her representation as a strong woman in Chitra Divakurani’s The Palace of Illusions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=politics" title="politics">politics</a>, <a href="https://publications.waset.org/abstracts/search?q=representation" title=" representation"> representation</a>, <a href="https://publications.waset.org/abstracts/search?q=silence" title=" silence"> silence</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20structures" title=" social structures"> social structures</a> </p> <a href="https://publications.waset.org/abstracts/50620/candid-panchalis-unheard-womanhood-a-study-of-chitra-divakuranis-the-palace-of-illusions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">18</span> The Impact of Keyword and Full Video Captioning on Listening Comprehension</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elias%20Bensalem">Elias Bensalem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates the effect of two types of captioning (full and keyword captioning) on listening comprehension. Thirty-six university-level EFL students participated in the study. They were randomly assigned to watch three video clips under three conditions. The first group watched the video clips with full captions. The second group watched the same video clips with keyword captions. The control group watched the video clips without captions. After watching each clip, participants took a listening comprehension test. 
At the end of the experiment, participants completed a questionnaire to measure their perceptions about the use of captions and the video clips they watched. Results indicated that the full captioning group significantly outperformed both the keyword captioning and the no-captioning groups on the listening comprehension tests. However, this study did not find any significant difference between the keyword captioning group and the no-captioning group. Results of the survey suggest that keyword captions were a source of distraction for participants. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=captions" title="captions">captions</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL" title=" EFL"> EFL</a>, <a href="https://publications.waset.org/abstracts/search?q=listening%20comprehension" title=" listening comprehension"> listening comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=video" title=" video"> video</a> </p> <a href="https://publications.waset.org/abstracts/62467/the-impact-of-keyword-and-full-video-captioning-on-listening-comprehension" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">262</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">17</span> Chronic Hypertension, Aquaporin and Hydraulic Conductivity: A Perspective on Pathological Connections</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chirag%20Raval">Chirag Raval</a>, <a href="https://publications.waset.org/abstracts/search?q=Jimmy%20Toussaint"> Jimmy Toussaint</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Tieuvi%20Nguyen"> Tieuvi Nguyen</a>, <a href="https://publications.waset.org/abstracts/search?q=Hadi%20Fadaifard"> Hadi Fadaifard</a>, <a href="https://publications.waset.org/abstracts/search?q=George%20Wolberg"> George Wolberg</a>, <a href="https://publications.waset.org/abstracts/search?q=Steven%20Quarfordt"> Steven Quarfordt</a>, <a href="https://publications.waset.org/abstracts/search?q=Kung-ming%20Jan"> Kung-ming Jan</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20S.%20Rumschitzki"> David S. Rumschitzki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Numerous studies examine aquaporins’ role in osmotic water transport in various systems, but virtually none focus on aquaporins’ role in hydrostatically driven water transport involving mammalian cells, save for our laboratory’s recent study of aortic endothelial cells. Here we investigate aquaporin-1 expression and function in the aortic endothelium in two high-renin rat models of hypertension: the spontaneously hypertensive genomically altered Wistar-Kyoto rat variant, and Sprague-Dawley rats made hypertensive by two-kidney, one-clip Goldblatt surgery. We measured aquaporin-1 expression in aortic endothelial cells from whole rat aortas by quantitative immunohistochemistry, and function by measuring the pressure-driven hydraulic conductivities of excised rat aortas with both intact and denuded endothelia on the same vessel. We use these to calculate the effective intimal hydraulic conductivity, which is a combination of endothelial and subendothelial components. We observed well-correlated enhancements in aquaporin-1 expression and function in both hypertensive rat models, as well as in aortas from normotensive rats whose expression was upregulated by 2 h of forskolin treatment. 
Upregulated aquaporin-1 expression and function may be a response to hypertension that critically determines conduit artery vessel wall viability and long-term susceptibility to atherosclerosis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acute%20hypertension" title="acute hypertension">acute hypertension</a>, <a href="https://publications.waset.org/abstracts/search?q=aquaporin-1" title=" aquaporin-1"> aquaporin-1</a>, <a href="https://publications.waset.org/abstracts/search?q=hydraulic%20conductivity" title=" hydraulic conductivity"> hydraulic conductivity</a>, <a href="https://publications.waset.org/abstracts/search?q=hydrostatic%20pressure" title=" hydrostatic pressure"> hydrostatic pressure</a>, <a href="https://publications.waset.org/abstracts/search?q=aortic%20endothelial%20cells" title=" aortic endothelial cells"> aortic endothelial cells</a>, <a href="https://publications.waset.org/abstracts/search?q=transcellular%20flow" title=" transcellular flow"> transcellular flow</a> </p> <a href="https://publications.waset.org/abstracts/39927/chronic-hypertension-aquaporin-and-hydraulic-conductivity-a-perspective-on-pathological-connections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39927.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">16</span> Multimodal Convolutional Neural Network for Musical Instrument Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yagya%20Raj%20Pandeya">Yagya Raj Pandeya</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonwhoan%20Lee"> Joonwhoan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The dynamic behavior of music and video makes it difficult for a computer system to evaluate musical instrument playing in a video. 
Any television or film video clip with music information is a rich source for analyzing musical instruments using modern machine learning technologies. In this research, we integrate the audio and video information sources using convolutional neural networks (CNNs) and pass the network-learned features through a recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNNs for music and video feature extraction and then fine-tune each model. The music network uses 2D convolutions, and the video network uses 3D convolutions (C3D). Finally, we concatenate the music and video features while preserving the time-varying features. A long short-term memory (LSTM) network is used for long-term dynamic feature characterization, followed by late fusion with the generalized mean. The proposed audio-video multimodal neural network achieves better performance in recognizing the musical instrument. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20convolution" title=" 3D convolution"> 3D convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=music-video%20feature%20extraction" title=" music-video feature extraction"> music-video feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20mean" title=" generalized mean"> generalized mean</a> </p> <a href="https://publications.waset.org/abstracts/104041/multimodal-convolutional-neural-network-for-musical-instrument-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing 
mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15</span> Hearing Aids Maintenance Training for Hearing-Impaired Preschool Children with the Help of Motion Graphic Tools</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Mokhtarzadeh">M. Mokhtarzadeh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Taheri%20Qomi"> M. Taheri Qomi</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Nikafrooz"> M. Nikafrooz</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Atashafrooz"> A. Atashafrooz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the present study was to investigate the effectiveness of motion graphics as a learning medium for training hearing-impaired children in hearing aids maintenance skills. The statistical population of this study consisted of all children with hearing loss in Ahvaz city, aged 4 to 7 years. As the sample, 60 children, selected by multistage random sampling, were randomly assigned to two groups: an experimental group (30 children) and a control group (30 children). The research method was experimental, with a pretest-posttest control group design. The intervention consisted of a 2-minute motion graphics clip for training hearing aids maintenance skills. Data were collected using a 9-question researcher-made questionnaire and analyzed using one-way analysis of covariance. Results showed that training hearing aids maintenance skills with motion graphics was significantly effective for those children. The results of this study can be used by educators, teachers, professionals, and parents to train children with disabilities or typically developing students. 
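The one-way analysis of covariance used in the study above can be sketched as a comparison of nested linear models: the F statistic asks whether adding the group term improves the fit to post-test scores beyond what the pre-test covariate already explains. This is a hypothetical NumPy illustration of the statistic only, not the authors' actual analysis, and the simulated numbers in the usage example are invented.

```python
import numpy as np

def ancova_group_F(pre, post, group):
    """One-way ANCOVA as a nested-model comparison: F statistic for a
    two-group effect on post-test scores, adjusting for pre-test scores."""
    pre, post, group = (np.asarray(a, dtype=float) for a in (pre, post, group))
    n = len(post)
    X_full = np.column_stack([np.ones(n), pre, group])  # intercept + covariate + group dummy
    X_reduced = np.column_stack([np.ones(n), pre])      # same model without the group term

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        resid = post - X @ beta
        return resid @ resid

    rss_full, rss_reduced = rss(X_full), rss(X_reduced)
    df_resid = n - X_full.shape[1]  # residual degrees of freedom of the full model
    # One extra parameter in the full model, so the numerator df is 1.
    return (rss_reduced - rss_full) / (rss_full / df_resid)
```

The returned F would be compared against an F(1, n-3) distribution for a p-value; dedicated statistical software reports this directly.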
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hearing%20aids" title="hearing aids">hearing aids</a>, <a href="https://publications.waset.org/abstracts/search?q=hearing%20aids%20maintenance%20skill" title=" hearing aids maintenance skill"> hearing aids maintenance skill</a>, <a href="https://publications.waset.org/abstracts/search?q=hearing%20impaired%20children" title=" hearing impaired children"> hearing impaired children</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20graphics" title=" motion graphics"> motion graphics</a> </p> <a href="https://publications.waset.org/abstracts/124635/hearing-aids-maintenance-training-for-hearing-impaired-preschool-children-with-the-help-of-motion-graphic-tools" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14</span> Validation of Contemporary Physical Activity Tracking Technologies through Exercise in a Controlled Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reem%20I.%20Altamimi">Reem I. Altamimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Geoff%20D.%20Skinner"> Geoff D. Skinner</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Extended periods of sedentary behavior increase the risk of becoming overweight and/or obese, which is linked to other health problems. Adding technology to the term &lsquo;active living&rsquo; permits its inclusion in promoting and facilitating habitual physical activity. 
Technology can act either as a barrier to, or as a facilitator of, this lifestyle, depending on the chosen technology. Physical Activity Monitoring Technologies (PAMTs) are a popular example of such technologies. Different contemporary PAMTs have been evaluated based on customer reviews; however, there is a lack of published experimental research into the efficacy of PAMTs. This research aims to investigate the reliability of four PAMTs: two wristbands (Fitbit Flex and Jawbone UP), a waist-clip (Fitbit One), and a mobile application (iPhone Health Application) for recording a specific distance walked on a treadmill (1.5 km) at a constant speed. Physical activity tracking technologies vary in their recordings, even while the same activity is being performed. This research demonstrates that the Jawbone UP band recorded the most accurate distance compared to the Fitbit One, the Fitbit Flex, and the iPhone Health Application. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fitbit" title="Fitbit">Fitbit</a>, <a href="https://publications.waset.org/abstracts/search?q=jawbone%20up" title=" jawbone up"> jawbone up</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20tracking%20applications" title=" mobile tracking applications"> mobile tracking applications</a>, <a href="https://publications.waset.org/abstracts/search?q=physical%20activity%20tracking%20technologies" title=" physical activity tracking technologies"> physical activity tracking technologies</a> </p> <a href="https://publications.waset.org/abstracts/40783/validation-of-contemporary-physical-activity-tracking-technologies-through-exercise-in-a-controlled-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40783.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">322</span> </span> </div> </div> 
<div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13</span> Relationship between Learning Methods and Learning Outcomes: Focusing on Discussions in Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jaeseo%20Lim">Jaeseo Lim</a>, <a href="https://publications.waset.org/abstracts/search?q=Jooyong%20Park"> Jooyong Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Although there is ample evidence that student involvement enhances learning, college education is still mainly centered on lectures. However, in recent years, the effectiveness of discussions and the use of collective intelligence have attracted considerable attention. This study intends to examine the empirical effects of discussions on learning outcomes under various conditions. Eighty-eight college students participated in the study and were randomly assigned to three groups. Group 1 was told to review material after a lecture, as in a traditional lecture-centered class. Students were given time to review the material for themselves after watching the lecture in a video clip. Group 2 participated in a discussion in groups of three or four after watching the lecture. Group 3 participated in a discussion after studying on their own. Unlike the previous two groups, students in Group 3 did not watch the lecture. The participants in the three groups were tested after studying. The test questions consisted of memorization problems, comprehension problems, and application problems. The results showed that the groups where students participated in discussions had significantly higher test scores. Moreover, the group where students studied on their own did better than the one where students watched a lecture. Thus, discussions are shown to be effective for enhancing learning. 
In particular, discussions seem to play a role in preparing students to solve application problems. This is a preliminary study and other age groups and various academic subjects need to be examined in order to generalize these findings. We also plan to investigate what kind of support is needed to facilitate discussions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=discussions" title="discussions">discussions</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture" title=" lecture"> lecture</a>, <a href="https://publications.waset.org/abstracts/search?q=test" title=" test"> test</a> </p> <a href="https://publications.waset.org/abstracts/96933/relationship-between-learning-methods-and-learning-outcomes-focusing-on-discussions-in-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96933.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12</span> Wire Localization Procedures in Non-Palpable Breast Cancers: An Audit Report and Review of Literature</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Waqas%20Ahmad">Waqas Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Eisha%20Tahir"> Eisha Tahir</a>, <a href="https://publications.waset.org/abstracts/search?q=Shahper%20Aqeel"> Shahper Aqeel</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Imran%20Khalid%20Niazi"> Imran Khalid Niazi</a>, <a href="https://publications.waset.org/abstracts/search?q=Amjad%20Iqbal"> Amjad Iqbal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Breast conservation surgery applies a number of techniques for accurate localization of lesions. Wire localization remains the method of choice in non-palpable breast cancers post-neoadjuvant chemotherapy. Objective: The aim of our study was to determine the accuracy of wire localization procedures in our department and compare it with the internationally set protocols of the Royal College of Radiologists. The accuracy of the procedure was assessed using post-wire mammography as well as the margin status of the postoperative specimen. Methods: We retrospectively reviewed the data of 225 patients who presented to our department from May 2014 to June 2015 post neoadjuvant chemotherapy with non-palpable cancers. These patients were candidates for wire-localized lumpectomies under either ultrasound or stereotactic guidance. A metallic marker was placed in all patients at the time of biopsy. A post-wire mammogram was performed in all patients, and the distance of the wire tip from the marker was calculated. The presence or absence of the metallic clip in the postoperative specimen, as well as the margin status of the specimen, was noted. Results: 157 sonographic and 68 stereotactic wire localization procedures were performed. 95% of the wire tips were within 1 cm of the metallic marker. Margin status was negative in 94% of the histopathological specimens. Conclusion: Our audit demonstrates more than 95% accuracy of image-guided wire localization in the successful excision of non-palpable breast lesions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=breast" title="breast">breast</a>, <a href="https://publications.waset.org/abstracts/search?q=cancer" title=" cancer"> cancer</a>, <a href="https://publications.waset.org/abstracts/search?q=non-palpable" title=" non-palpable"> non-palpable</a>, <a href="https://publications.waset.org/abstracts/search?q=wire%20localization" title=" wire localization"> wire localization</a> </p> <a href="https://publications.waset.org/abstracts/49198/wire-localization-procedures-in-non-palpable-breast-cancers-an-audit-report-and-review-of-literature" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49198.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">308</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> Need for E-Learning: An Effective Method in Educating the Persons with Hearing Impairment Using Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Vijayakumar">S. Vijayakumar</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20B.%20Rathna%20Kumar"> S. B. Rathna Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Navnath%20D%20Jagadale"> Navnath D Jagadale </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Learning and teaching are the challenges ahead in the education of students with hearing impairment who use sign language (SHISL). Both students and teachers face difficulties in the process of learning/teaching. Communication is one of the main barriers in teaching SHISL. 
Further, the courses of study or subjects available to SHISL are limited, at least in countries like India. Students with hearing impairment mainly opt for sign language as a communication mode. Subjects like physics, chemistry, and advanced mathematics are not available in the curriculum for SHISL since their content and ideas are complex. In India, an exemption from language papers is given to students with hearing impairment. It may give them the opportunity to secure secondary/higher secondary qualifications. It is a known fact that students with hearing impairment face difficulty in their future careers. They secure neither higher studies nor good employment opportunities. Vocational training in various trades will land them in a few jobs with little pay. Moreover, not all of them can reach higher positions in the government or private sector, in competitive fields, or where technical knowledge is required. E-learning with sign language instruction can be used for teaching language and science subjects. Computer Based Instruction (CBI), Computer Based Training (CBT), and Computer Assisted Instruction (CAI) are now part and parcel of modern education. Such e-learning will also include signed video clips corresponding to the topic. Learning language subjects will improve the understanding of concepts in different subjects. Learning other science subjects like their hearing counterparts will enable SHISL to go higher in their studies and reach higher to pluck the fruit of the tree of employment. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=students%20with%20hearing%20impairment%20using%20sign%20language" title="students with hearing impairment using sign language">students with hearing impairment using sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=hearing%20impairment" title=" hearing impairment"> hearing impairment</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20subjects" title=" language subjects"> language subjects</a>, <a href="https://publications.waset.org/abstracts/search?q=science%20subjects" title=" science subjects"> science subjects</a>, <a href="https://publications.waset.org/abstracts/search?q=e-learning" title=" e-learning "> e-learning </a> </p> <a href="https://publications.waset.org/abstracts/41414/need-for-e-learning-an-effective-method-in-educating-the-persons-with-hearing-impairment-using-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41414.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">405</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> Co-Creation of Content with the Students in Entrepreneurship Education to Capture Entrepreneurship Phenomenon in an Innovative Way</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prema%20Basargekar">Prema Basargekar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facilitating the subject ‘Entrepreneurship Education’ in higher education, such as management studies, can be exhilarating as well as challenging. It is a multi-disciplinary and ever-evolving subject. 
Capturing entrepreneurship as a phenomenon in a holistic manner is a daunting task, as it requires covering various dimensions such as new idea generation, entrepreneurial traits, the scanning of business opportunities, the role of policymakers, and value creation, to name a few. Implicit entrepreneurship theory and effectuation are two different theories that focus on engaging the participants to create content by using their own experiences, perceptions, and belief systems. This helps in understanding the phenomenon holistically. The assumption here is that all of us are part of the entrepreneurial ecosystem, and effective learning can come through active engagement and peer learning by all the participants together. The present study is an attempt to use these theories in a class assignment given to the students at the beginning of the course to build the course content and understand entrepreneurship as a phenomenon in a better way through peer learning. The assignment was given to three batches of MBA post-graduate students doing the program in one of the private business schools in India. The subject of ‘Entrepreneurship Management’ is facilitated in the third trimester of the first year. At the beginning of the course, the students were given the assignment to submit a brief write-up/collage/picture/poem or any other format on “What does entrepreneurship mean to you?” They were asked to give their candid opinions about entrepreneurship as a phenomenon as they perceive it. In all, 156 postgraduate MBA students submitted the assignment. These assignments were further used to find answers to two research questions: 1) Are students able to use divergent and innovative forms, such as poetry, illustrations, and videos, to express their opinions? 2) What dimensions of entrepreneurship emerge to help understand the phenomenon in a better way? 
The study uses Braun and Clarke's framework of reflexive thematic analysis for the qualitative analysis. The study finds that students responded to this assignment enthusiastically and expressed their thoughts in multiple ways, such as poetry, illustration, personal narrative, and video. The content analysis revealed that there could be seven dimensions through which to look at entrepreneurship as a phenomenon. They are 1) entrepreneurial traits, 2) entrepreneurship as a journey, 3) value creation by entrepreneurs in terms of economic and social value, 4) entrepreneurial role models, 5) new business ideas and innovations, 6) personal entrepreneurial experiences and aspirations, and 7) the entrepreneurial ecosystem. The study concludes that an implicit approach to facilitating entrepreneurship education helps in understanding it as a live phenomenon. It also encourages students to apply divergent and convergent thinking, and it helps in triggering new business ideas or stimulating the entrepreneurial aspirations of the students. The significance of the study lies in the application of implicit theories in the classroom to make higher education more engaging and effective. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=co-creation%20of%20content" title="co-creation of content">co-creation of content</a>, <a href="https://publications.waset.org/abstracts/search?q=divergent%20thinking" title=" divergent thinking"> divergent thinking</a>, <a href="https://publications.waset.org/abstracts/search?q=entrepreneurship%20education" title=" entrepreneurship education"> entrepreneurship education</a>, <a href="https://publications.waset.org/abstracts/search?q=implicit%20theory" title=" implicit theory"> implicit theory</a> </p> <a href="https://publications.waset.org/abstracts/160834/co-creation-of-content-with-the-students-in-entrepreneurship-education-to-capture-entrepreneurship-phenomenon-in-an-innovative-way" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160834.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">74</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Third Eye: A Hybrid Portrayal of Visuospatial Attention through Eye Tracking Research and Modular Arithmetic </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shareefa%20Abdullah%20Al-Maqtari">Shareefa Abdullah Al-Maqtari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ruzaika%20Omar%20Basaree"> Ruzaika Omar Basaree</a>, <a href="https://publications.waset.org/abstracts/search?q=Rafeah%20Legino"> Rafeah Legino</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A pictorial representation of hybrid forms in science-art collaboration has become a crucial issue in the course of exploring a new painting technique development. 
This is directly related to the reception of an invisible-recognition phenomenology. In the hybrid pictorial representation of invisible-recognition phenomenology, the challenging issue is how to depict the pictorial features of indescribable objects from their mental source, modality, and transparency. This paper proposes the hybrid painting technique Demonstrate, Resemble, and Synthesize (DRS) through a combination of the hybrid aspect-recognition representation of understanding pictures, the demonstrative mode, number theory, patterns in the modular arithmetic system, and the coherence theory of visual attention in dynamic scene representation. Multi-method digital gaze data analyses, pattern-modular table operation design, and a rotation parameter were used for the visualization. In the scientific process, an eye-tracking experiment based on video sections was conducted using Tobii T60 remote eye-tracking hardware and Tobii Studio analysis software to collect and analyze the eye movements of ten participants while watching the video clip of Alexander Paulikevitch’s performance ‘Tajwal’. Results: we found that fixation count in section one was positively and moderately correlated with that in section two, Pearson’s (r = .10, p < .05, 2-tailed), as was fixation duration, Pearson’s (r = .10, p < .05, 2-tailed). However, a paired-samples t-test indicates that scores were significantly higher for section one (M = 2.2, SD = .6) than for section two (M = 1.93, SD = .6), t(9) = 2.44, p < .05, d = 0.87. In the visual process, the exported gaze-number data N were used to resemble the hybrid forms of visuospatial attention via the table-mod-analysis operation. The explored hybrid guideline was simply applicable, and it could serve as an alternative approach to the sustainability of contemporary visual arts. 
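The two statistics reported above (Pearson's r for the section-to-section correlations and a paired-samples t for the section comparison) can be reproduced from their textbook formulas. The following is a minimal pure-Python sketch; it is not the authors' actual Tobii Studio pipeline, and the fixation-count values at the end are illustrative stand-ins, not the study's data.

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def paired_t(x, y):
    # Paired-samples t statistic on the per-participant differences,
    # with df = n - 1; assumes the differences are not all identical.
    n = len(x)
    d = [a - b for a, b in zip(x, y)]
    md = sum(d) / n
    sd = math.sqrt(sum((v - md) ** 2 for v in d) / (n - 1))
    return md / (sd / math.sqrt(n))

# Hypothetical per-participant scores for section one vs. section two (n = 10).
sec1 = [2.1, 2.9, 1.8, 2.5, 2.0, 2.7, 1.9, 2.3, 2.6, 2.2]
sec2 = [1.8, 2.5, 1.9, 2.2, 1.7, 2.4, 1.6, 2.0, 2.3, 1.9]
r = pearson_r(sec1, sec2)
t = paired_t(sec1, sec2)
```

The resulting t value would be compared against the critical value for df = n - 1 = 9, matching the t(9) reported in the abstract.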
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=science-art%20collaboration" title="science-art collaboration">science-art collaboration</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20forms" title=" hybrid forms"> hybrid forms</a>, <a href="https://publications.waset.org/abstracts/search?q=pictorial%20representation" title=" pictorial representation"> pictorial representation</a>, <a href="https://publications.waset.org/abstracts/search?q=visuospatial%20attention" title=" visuospatial attention"> visuospatial attention</a>, <a href="https://publications.waset.org/abstracts/search?q=modular%20arithmetic" title=" modular arithmetic"> modular arithmetic</a> </p> <a href="https://publications.waset.org/abstracts/78997/third-eye-a-hybrid-portrayal-of-visuospatial-attention-through-eye-tracking-research-and-modular-arithmetic" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78997.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">364</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> On the Weightlessness of Vowel Lengthening: Insights from Arabic Dialect of Yemen and Contribution to Psychoneurolinguistics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sadeq%20Al%20Yaari">Sadeq Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Alkhunayn"> Muhammad Alkhunayn</a>, <a href="https://publications.waset.org/abstracts/search?q=Montaha%20Al%20Yaari"> Montaha Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20Al%20Yaari"> Ayman Al Yaari</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Aayah%20Al%20Yaari"> Aayah Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Adham%20Al%20Yaari"> Adham Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajedah%20Al%20Yaari"> Sajedah Al Yaari</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatehi%20Eissa"> Fatehi Eissa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: It is well established that lengthening (longer duration) is considered one of the correlates of lexical and phrasal prominence. However, it remains unexplored whether the scope of vowel lengthening in the Arabic dialect of Yemen (ADY) is differently affected by educated and/or uneducated speakers from different dialectal backgrounds. Specifically, the research aims to examine whether or not the linguistic background acquired through different educational channels makes a difference in the speech of the speaker and how that is reflected in related psychoneurolinguistic impairments. Methods: For the above-mentioned purpose, we conducted an articulatory experiment wherein a set of words from ADY was examined in the dialectal speech of 1,700 educated and uneducated Yemeni speakers aged 19-61 years who grew up in five regions of the country: northern, southern, eastern, western, and central. The speakers were accordingly assigned to five dialectal groups. A seven-minute video clip was shown to the participants, who were asked to spontaneously describe the scene they had just watched, before the researchers linguistically and statistically analyzed the recordings to weigh vowel lengthening in the speech of the participants. Results: The results show that vowels (monophthongs and diphthongs) are lengthened by all participants. Unexpectedly, educated and uneducated speakers from the northern and central dialects lengthen vowels. 
Compared with uneducated speakers from the same dialect, educated speakers lengthen fewer vowels in their dialectal speech. Conclusions: These findings support the notion that extensive exposure to dialects on account of standard language can cause changes to the patterns of dialects themselves, and this can be seen in the speech of educated and uneducated speakers of these dialects. Further research is needed to clarify the phonemic distinctive features and frequency of lengthening in other open class systems (i.e., nouns, adjectives, and adverbs). Phonetic and phonological report measures are needed as well as validation of existing measures for assessing phonemic vowel length in the Arabic population in general and Arabic individuals with voice, speech, and language impairments in particular. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vowel%20lengthening" title="vowel lengthening">vowel lengthening</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic%20dialect%20of%20Yemen" title=" Arabic dialect of Yemen"> Arabic dialect of Yemen</a>, <a href="https://publications.waset.org/abstracts/search?q=phonetics" title=" phonetics"> phonetics</a>, <a href="https://publications.waset.org/abstracts/search?q=phonology" title=" phonology"> phonology</a>, <a href="https://publications.waset.org/abstracts/search?q=impairment" title=" impairment"> impairment</a>, <a href="https://publications.waset.org/abstracts/search?q=distinctive%20features" title=" distinctive features"> distinctive features</a> </p> <a href="https://publications.waset.org/abstracts/186326/on-the-weightlessness-of-vowel-lengthening-insights-from-arabic-dialect-of-yemen-and-contribution-to-psychoneurolinguistics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186326.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right 
rounded"> Downloads <span class="badge badge-light">40</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nassima%20Noufail">Nassima Noufail</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Bouhali"> Sara Bouhali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we develop a semi-supervised solution for the purpose of action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips, and we used the K-means algorithm for this segmentation; our goal is to find groups based on similarity in the video. Applying k-means clustering to all the frames is time-consuming; therefore, we started with the identification of transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames. The Gaussian filter blurs the image and omits the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change; we then used this vector of filter responses as an input to our k-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map. In the resulting visual map, similar pixels are grouped together. 
We then computed a cluster score indicating how near the clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint is an indication of the beginning of a new video segment. In the second part, for each segment from part 1, we randomly selected a 16-frame clip, and then we extracted spatiotemporal features for every 16 frames using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence, we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), and we used the trained classifier to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and we achieved an accuracy that outperforms the state of the art by 1.2%. 
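The segmentation stage described above (cluster frame features with k-means, track a per-frame clustering score, and mark breakpoints where the signal's RMS level shifts) can be sketched in simplified form. This is a hedged illustration, not the paper's implementation: `kmeans_1d` stands in for k-means over the real filter-response vectors, and the window size and threshold are assumed values.

```python
def kmeans_1d(values, k, iters=20):
    # Minimal 1-D k-means (k >= 2): initialize centers evenly over the
    # value range, then alternate assignment and center updates.
    centers = [min(values) + (max(values) - min(values)) * i / (k - 1)
               for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def segment_breakpoints(signal, window=10, threshold=0.5):
    # Mark frame indices where the RMS level of the clustering-score
    # signal shifts by more than `threshold` between adjacent windows;
    # each such index marks the start of a new video segment.
    def rms(xs):
        return (sum(x * x for x in xs) / len(xs)) ** 0.5
    breaks = []
    for i in range(window, len(signal) - window + 1, window):
        if abs(rms(signal[i:i + window]) - rms(signal[i - window:i])) > threshold:
            breaks.append(i)
    return breaks

# Synthetic clustering-score signal: a level shift at frame 50
# stands in for a semantic scene change.
signal = [0.1] * 50 + [1.0] * 50
boundaries = segment_breakpoints(signal)
```

Each detected boundary would then delimit a segment from which a 16-frame clip is sampled for C3D feature extraction.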
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=video%20segmentation" title="video segmentation">video segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20detection" title=" action detection"> action detection</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=Kmeans" title=" Kmeans"> Kmeans</a>, <a href="https://publications.waset.org/abstracts/search?q=C3D" title=" C3D"> C3D</a> </p> <a href="https://publications.waset.org/abstracts/162586/the-application-of-video-segmentation-methods-for-the-purpose-of-action-detection-in-videos" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162586.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> N-Glycosylation in the Green Microalgae Chlamydomonas reinhardtii </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pierre-Louis%20Lucas">Pierre-Louis Lucas</a>, <a href="https://publications.waset.org/abstracts/search?q=Corinne%20Loutelier-Bourhis"> Corinne Loutelier-Bourhis</a>, <a href="https://publications.waset.org/abstracts/search?q=Narimane%20Mati-Baouche"> Narimane Mati-Baouche</a>, <a href="https://publications.waset.org/abstracts/search?q=Philippe%20Chan%20Tchi-Song"> Philippe Chan Tchi-Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Patrice%20Lerouge"> Patrice Lerouge</a>, <a href="https://publications.waset.org/abstracts/search?q=Elodie%20Mathieu-Rivet"> Elodie 
Mathieu-Rivet</a>, <a href="https://publications.waset.org/abstracts/search?q=Muriel%20Bardor"> Muriel Bardor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> N-glycosylation is a post-translational modification taking place in the Endoplasmic Reticulum and the Golgi apparatus, where defined glycan features are added to proteins at a very specific sequence, Asn-X-Thr/Ser/Cys, where X can be any amino acid except proline. Because it is well established that these N-glycans play a critical role in protein biological activity and protein half-life, and that a different N-glycan structure may induce an immune response, they are very important for biopharmaceuticals, which are mainly glycoproteins bearing N-glycans. At present, most biopharmaceuticals are produced in mammalian cells like Chinese Hamster Ovary (CHO) cells because their N-glycosylation is similar to that of humans, but due to the high production costs, several other species are being investigated as possible alternative systems. For this purpose, the green microalga Chlamydomonas reinhardtii was investigated as a potential production system for biopharmaceuticals. This choice was influenced by the fact that C. reinhardtii is a well-studied, fast-growing microalga with many molecular biology tools available. This organism also produces N-glycans on its endogenous proteins. However, the analysis of the N-glycan structure of this microalga has revealed some differences compared to humans. Unlike in humans, where the glycans are processed by the key enzymes N-acetylglucosaminyltransferase I and II (GnTI and GnTII), which add GlcNAc residues to form a GlcNAc₂Man₃GlcNAc₂ core N-glycan, C. reinhardtii lacks these two enzymes and possesses a GnTI-independent glycosylation pathway. Moreover, some enzymes not present in humans, such as xylosyltransferases and methyltransferases, are thought to act on the glycans of C. reinhardtii. 
Furthermore, a recent structural study by mass spectrometry showed that the N-glycosylation precursor, assumed to be conserved in almost all eukaryotic cells, is a linear Man₅GlcNAc₂ in C. reinhardtii rather than a branched one. In this work, we will discuss the newly released MS information on the C. reinhardtii N-glycan structure and its impact on our attempts to modify the glycans in a human-like manner. Two strategies will be discussed. The first consists of studying xylosyltransferase insertional mutants from the CLIP library in order to remove xyloses from the N-glycans. The second goes further toward humanization by transforming the microalga with exogenous genes from Toxoplasma gondii encoding activities similar to GnTI and GnTII, with the aim of synthesizing GlcNAc₂Man₃GlcNAc₂ in C. reinhardtii. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chlamydomonas%20reinhardtii" title="Chlamydomonas reinhardtii">Chlamydomonas reinhardtii</a>, <a href="https://publications.waset.org/abstracts/search?q=N-glycosylation" title=" N-glycosylation"> N-glycosylation</a>, <a href="https://publications.waset.org/abstracts/search?q=glycosyltransferase" title=" glycosyltransferase"> glycosyltransferase</a>, <a href="https://publications.waset.org/abstracts/search?q=mass%20spectrometry" title=" mass spectrometry"> mass spectrometry</a>, <a href="https://publications.waset.org/abstracts/search?q=humanization" title=" humanization"> humanization</a> </p> <a href="https://publications.waset.org/abstracts/88988/n-glycosylation-in-the-green-microalgae-chlamydomonas-reinhardtii" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88988.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> 
<h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> MicroRNA Drivers of Resistance to Androgen Deprivation Therapy in Prostate Cancer</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Philippa%20Saunders">Philippa Saunders</a>, <a href="https://publications.waset.org/abstracts/search?q=Claire%20Fletcher"> Claire Fletcher</a> </p> <p class="card-text"><strong>Abstract:</strong></p> INTRODUCTION: Prostate cancer is the most prevalent malignancy affecting Western males. It is initially an androgen-dependent disease: androgens bind to the androgen receptor and drive the expression of genes that promote proliferation and evasion of apoptosis. Despite reduced androgen dependence in advanced prostate cancer, androgen receptor signaling remains a key driver of growth. Androgen deprivation therapy (ADT) is, therefore, a first-line treatment approach and works well initially, but resistance inevitably develops. Abiraterone and Enzalutamide are drugs widely used in ADT and are androgen synthesis and androgen receptor signaling inhibitors, respectively. The shortage of other treatment options means acquired resistance to these drugs is a major clinical problem. MicroRNAs (miRs) are important mediators of post-transcriptional gene regulation and show altered expression in cancer. Several have been linked to the development of resistance to ADT. Manipulation of such miRs may be a pathway to breakthrough treatments for advanced prostate cancer. This study aimed to validate ADT resistance-implicated miRs and their clinically relevant targets. MATERIAL AND METHOD: Small RNA-sequencing of Abiraterone- and Enzalutamide-resistant C42 prostate cancer cells identified subsets of miRs dysregulated as compared to parental cells. 
Real-Time Quantitative Reverse Transcription PCR (qRT-PCR) was used to validate altered expression of the candidate ADT resistance-implicated miRs 195-5p, 497-5p and 29a-5p in ADT-resistant and -responsive prostate cancer cell lines, patient-derived xenografts (PDXs) and primary prostate cancer explants. RESULTS AND DISCUSSION: This study suggests a possible role for miR-497-5p in the development of ADT resistance in prostate cancer. MiR-497-5p expression was increased in ADT-resistant versus ADT-responsive prostate cancer cells. Importantly, miR-497-5p expression was also increased in Enzalutamide-treated, castrated (ADT-mimicking) PDXs versus intact PDXs. MiR-195-5p was also elevated in ADT-resistant versus -responsive prostate cancer cells, while miR-29a-5p expression was decreased. Candidate clinically relevant targets of miR-497-5p in prostate cancer were identified by mining AGO-PAR-CLIP-seq data sets and may include AVL9 and FZD6. CONCLUSION: In summary, this study identified microRNAs that are implicated in prostate cancer resistance to androgen deprivation therapy and could represent novel therapeutic targets for advanced disease. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=microRNA" title="microRNA">microRNA</a>, <a href="https://publications.waset.org/abstracts/search?q=androgen%20deprivation%20therapy" title=" androgen deprivation therapy"> androgen deprivation therapy</a>, <a href="https://publications.waset.org/abstracts/search?q=Enzalutamide" title=" Enzalutamide"> Enzalutamide</a>, <a href="https://publications.waset.org/abstracts/search?q=abiraterone" title=" abiraterone"> abiraterone</a>, <a href="https://publications.waset.org/abstracts/search?q=patient-derived%20xenograft" title=" patient-derived xenograft"> patient-derived xenograft</a> </p> <a href="https://publications.waset.org/abstracts/159310/microrna-drivers-of-resistance-to-androgen-deprivation-therapy-in-prostate-cancer" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159310.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=candid%20clip&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=candid%20clip&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a 
href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
