CS221 Project Final: DominAI
Guillermo Angeris and Lucy Li

I. INTRODUCTION

From chess to Go to 2048, AI solvers have exceeded humans in game playing. However, much of the progress in game-playing algorithms has centered around perfect information games, where players' possible moves are visible to each other. We explored possible solutions that run in tractable time for Latin-American dominoes, an imperfect information game. This version of dominoes is four-player, team-based, and zero-sum with relatively simple rules, making it ideal for attack with modern algorithmic and approximation tools.

We expect humans to play at least as well as greedy: picking the playable tile with the greatest number of pips in their hand. Most novice players use this strategy, which does not consider the tiles that other players are likely to have, nor does it use that information to block opponents. We expect humans to do at most as well as pure-strategy optimal, which is minimax/negamax.

We developed two negamax-based algorithms to approximate an ordering over available moves for the computer player. Both algorithms use the probabilities of all players' tiles to weigh evaluation scores; in general, it should be noted that the game suffers heavily from combinatorial explosion throughout the crucial opening rounds.

II. RELATED WORKS

There has been some work on reducing the problem of imperfect information to many problems of perfect information, most famously in the Perfect Information Monte Carlo (PIMC) algorithm [1], as well as a similar Monte Carlo approach used for contract bridge [2]. Other algorithms have been proposed [3], [4] which either perform similar reductions or which can solve for exact Nash equilibria.

Because the game is relatively unknown in most places where AI research is common, there is a dearth of AI/ML work specifically focusing on dominoes. Only a few relatively old papers exist which touch upon the subject of reinforcement learning for domino strategies [5] on (comparatively) low-power computers. There are also some texts that refer to the two-player variant of the domino game [6], but there is much less hidden information in that game relative to the four-player one in question, since 28 dominoes are distributed over four players and we only observe 7 of them at the start of a round. There are some further (non-academic) references to domino AI, though the specification seems to use a relatively naive AI which does not infer other players' dominoes^1 and is therefore unlikely to be competitive.

III. TASK DEFINITION

Given the current state of a dominoes game, our agent aims to return the optimal move in order to beat its opponents while helping its partner.

A. Game Details

The game has the following rules. Let N = {1, 2, 3, 4} be the set of players, with partnering {{1, 3}, {2, 4}}, and let S be the set of 28 unique dominoes, the double-six set; i.e., each domino is a set of the form S_ij = {i, j}, 0 ≤ i, j ≤ 6, where i can equal j. The total number of pips of a domino is defined as p(S_ij) = i + j.

Players sit in a circle with team members facing each other, and play counter-clockwise. The game begins by drawing a uniform permutation of the dominoes (P_ij = σ(S_ij)) and allowing player i to pick the dominoes^2 {P_{7(i-1)+1}, P_{7(i-1)+2}, ..., P_{7i}} from the permutation. In other words, players each start a round with 7 randomly allocated tiles. The first move of the first round must be made by the player with the {6, 6} domino,^3 and the play continues to the `left,' in particular i → i + 1 and 4 → 1. Additionally, the first move of the next round is made by the person to the `left' of the first player in the current round, and this rule continues to be applied every round until the end of the game.

^1 The main example of a simple AI was found in https://github.com/carlosmccosta/Dominoes/, which was likely a class project and also seemed to be focused on OOP as a programming paradigm, rather than on the player AI itself.
^2 Here we use the natural isomorphism {i, j} ↦ k, where 1 ≤ k ≤ 28, for simplicity.
^3 We use repetition of a number in sets for symmetry. This is not strictly necessary, as {6, 6} = {6}.
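To make the notation above concrete, the short Python sketch below (helper names are ours, not taken from the authors' repository) builds the double-six set S, computes the pip count p(S_ij) = i + j, and deals 7 tiles to each of the 4 players from a uniform permutation.

import random

# The double-six set S: all unordered pairs {i, j} with 0 <= i <= j <= 6 (28 tiles).
DOUBLE_SIX_SET = [(i, j) for i in range(7) for j in range(i, 7)]

def pips(tile):
    """Total number of pips, p(S_ij) = i + j."""
    i, j = tile
    return i + j

def deal(rng=random):
    """Uniformly permute the 28 tiles and give 7 to each of the 4 players."""
    perm = DOUBLE_SIX_SET[:]
    rng.shuffle(perm)
    return [perm[7 * k: 7 * (k + 1)] for k in range(4)]

hands = deal()
assert len(DOUBLE_SIX_SET) == 28 and all(len(h) == 7 for h in hands)
# The holder of the (6, 6) domino opens the first round.
opener = next(k for k, hand in enumerate(hands) if (6, 6) in hand)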
At their turn, a player can only perform three possible actions: (1) put down a piece with an endpoint matching one of the endpoints of the current board, (2) pass if the player has no moves they can make (i.e., (1) cannot be completed), or (3) call that another player has passed even though they could have played. In the last case, if the player calls the foul correctly, the round ends and 50 points are added to the score of the offending team.

Each round can also end when one player has run out of dominoes or when no player can put down a legal domino. In the former case, the team of the player who has run out of dominoes wins the round, and the opposing team must add the current number of pips in their hands to their score. In the latter case, the team with the highest number of pips in their hands loses and must add that number to their total score. The first team to reach 100 points loses and the game ends.

More mathematically, in a given round we represent the board B as an ordered tuple of the moves performed by each player, together with the player who began the round. PASS identifies that the player has passed and CALL identifies that the player called. Otherwise, the move is just the domino that the player put down, so that the board in any given round is given by a tuple of dominoes P_ij. If a domino can be placed at either end of the board, then there are two possible moves: one where the domino is placed at the i end and the other where it is placed at the j end.

B. Evaluation

Ideally, the performance of an AI agent should be evaluated based on how many rounds it wins against experienced human players. However, due to limited time, people, and resources, the bulk of our results rely on playing 100+ games of our advanced algorithms against our baseline with different parameters.

C. Infrastructure and Game Representation

We built game engines that allow humans to input moves and AIs to play against each other. These frameworks track moves made by players as well as outputting game statistics such as tile probabilities, wins, and final pip sums per team.^4

The game's state is defined by the current open ends on the table, the plays that have been made so far by each player, and the tiles still not played. To intelligently assess the state of the game, the AI player also tracks and updates the probabilities of each player holding each unseen tile. For example, at the beginning, each of the 21 tiles not in the AI's hand has a 1/3 probability of being in each of the other players' hands. When a player passes, we know that this player has zero probability of holding any tile with a value equal to one of the open ends on the table, so we can renormalize the probabilities for those tiles over the remaining players. When a player puts down a tile, we update the probability of that tile to be 1 for that player and 0 for all other players. We additionally have to update the probabilities of all other tiles, because any given tile is more likely to belong to a player with more unplayed tiles than to one with fewer. In other words, a player's move not only changes the probability of the tile in question but also how the remaining unknown tiles are likely allocated among players.

^4 See https://github.com/guillean/DominAI.
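The bookkeeping just described can be sketched as follows. This is a simplified construction of ours (class and method names are hypothetical, not taken from the DominAI repository), covering only the pass and play updates.

class TileBeliefs:
    """The AI's view: for each unseen tile, a distribution over who might hold it."""

    def __init__(self, unseen_tiles, other_players):
        self.others = list(other_players)
        # Initially, every unseen tile is equally likely to sit in any other hand.
        self.p = {t: {q: 1.0 / len(self.others) for q in self.others}
                  for t in unseen_tiles}

    def _renormalize(self, tile):
        total = sum(self.p[tile].values())
        if total > 0:
            for q in self.p[tile]:
                self.p[tile][q] /= total

    def observe_pass(self, player, open_ends):
        # A passing player cannot hold any tile matching an open end of the board.
        for tile in self.p:
            if open_ends & set(tile):
                self.p[tile][player] = 0.0
                self._renormalize(tile)

    def observe_play(self, player, tile):
        # A played tile is certain: it leaves the unseen pool entirely.
        # (The tracker described above also reweights the remaining tiles by how
        # many unplayed tiles each player still holds; that step is omitted here.)
        self.p.pop(tile, None)

For instance, beliefs.observe_pass(2, {3, 5}) zeroes out, for player 2, every tile containing a 3 or a 5 and renormalizes those tiles' distributions over the other players.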
IV. APPROACH

The simplest strategy is to play in a greedy manner: if there are any moves that can be played, the current player plays the playable domino with the most pips in their hand. Many beginners play this way, and the optimal strategy coincides with the greedy one in later parts of the game, but this strategy often fails early on,^5 as optimal moves in the early game aren't always the ones with the highest value. We expect (and therefore assume from here on out) that players do no worse than playing greedily.

The optimal strategy in terms of pure actions^6 is to minimax/negamax over the players that are not on your team, while keeping track of all possible sets of dominoes consistent with what has been observed.

The idea for approximating solutions to the domino game is to convert the original, partial-information problem into a decomposition of many deterministic, complete-information games. Note that this approach is strictly a lower bound on the best possible strategy if there is no randomization on the part of the opposing players.^7

^5 A simple construction where this is the case is found in the book [7] or at https://gists.github.com/guillean under `domino_counter_example.txt'.
^6 In general, Nash equilibria in imperfect information games are mixed; that is, they specify a probability distribution over some support with more than one element.
^7 This is because negamax opponents always know the correct hand of the player, but the player is taking expectations by sampling from the distribution.
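As a concrete reference point, the greedy baseline from the first paragraph of this section reduces to a one-line rule. A minimal sketch of ours, with tiles as (i, j) pairs and the board's open ends as a set of values:

def greedy_move(hand, open_ends):
    """Play the highest-pip playable tile; return None when the player must pass."""
    playable = [t for t in hand if open_ends & set(t)]
    return max(playable, key=lambda t: t[0] + t[1]) if playable else None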
A. Negamax

As mentioned earlier, the optimal pure strategy is minimax: assume that the opponent is playing optimally and alternate between minimizing the value over the opponent's actions and maximizing the value over our own. Using the negamax variant of this recurrence simplifies the problem and improves runtime over standard minimax [8]. Negamax is used for zero-sum, two-player games. Dominoes is a zero-sum game because the value of one team's score is the negation of the other's. Though our version involves four players, pairs of players cooperate in teams, and because the final score of a given player is symmetric over the team's total score, each team can be seen as one large, colluding player playing optimally. The latter holds because we assume that a particular set of dominoes has been picked and is common knowledge (a worst-case assumption, since every sample assumes the opponent knows the exact hand of the computer on every round).

Our evaluation function for the current round (assuming early termination of the search) is the expected difference in remaining pips between the two teams; in general, we seek to maximize this margin in our favor. We also considered using a modified evaluation function, which included a feature corresponding to the difference in the number of played pieces for each team, but this did not seem to improve performance.

B. Imperfect Minimax Search

Minimax and its negamax variant are applicable to perfect information games. Games of imperfect information, such as dominoes, are games in which some players have information that other players are unable to see, though the game's structure and payoffs are common knowledge. In our specific case, we do not know the tiles that the other players have, and that uncertainty must be handled in some way.

Using the usual idea of minimax and allowing possible moves to be discounted by their current probabilities, we arrive at a simple heuristic for approximating the ordering over a given player's domino plays using techniques for perfect information games. We call this approximation Imperfect Minimax Search (IMS) (Algorithm 1). Here, G is the current game, p is the current player, d is the depth, and P_G(q, p) is a probability distribution over dominoes q and players p given the information known by the current player.^8 We define supp(p) = {e ∈ S | p(e) > 0} as the support of a probability distribution p over some set S.

^8 In other words, the game G is defined through the `player whose eyes we're looking through.'
Algorithm 1 Imperfect Minimax Search

procedure IMS(G, p, d)
    if G is finished or d = 0 then
        return Evaluate(G, p)
    end if
    s_max ← −∞
    for m ∈ supp(P_G(· | p)) such that m is valid in G do
        q ← P_G(m | p)
        G′ ← G updated with m played by p
        p′ ← next player after p plays m in G
        s_max ← max{s_max, −q · IMS(G′, p′, d − 1)}
    end for
    return s_max
end procedure
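A direct Python transcription of Algorithm 1 could look like the sketch below. The game-state interface (is_finished, valid_moves, move_probability, apply, next_player) and the pip-difference evaluation are hypothetical names of ours; the paper does not fix an API.

import math

def evaluate(game, player):
    # The simple evaluation function: expected pip sum left in the opposing
    # team's hands minus that of the player's own team (higher is better).
    return (game.expected_team_pips(game.opponents_of(player))
            - game.expected_team_pips(game.team_of(player)))

def ims(game, player, depth):
    """Imperfect Minimax Search: negamax where each move's value is weighted
    by the probability q that the move is actually available to `player`."""
    if game.is_finished() or depth == 0:
        return evaluate(game, player)
    s_max = -math.inf
    for move in game.valid_moves(player):         # m in supp(P_G(. | p))
        q = game.move_probability(move, player)   # q = P_G(m | p)
        child = game.apply(move, player)          # G' = G with m played by p
        s_max = max(s_max, -q * ims(child, child.next_player(), depth - 1))
    return s_max

At the root, the agent would take the argmax of the same loop over its own fully known hand; alpha-beta pruning, noted below, drops in unchanged.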
IMS is motivated by a few notions:

1) Each move's score should be discounted by the probability of it being possible, leading to a notion of `most likely moves.'
2) This reduction allows the use of alpha-beta pruning and also allows the approximation techniques of complete-information games without the need to resort to expensive sampling algorithms.

We iteratively deepen the search every 4 plays, where n is the number of turns played so far in the game, in an exponential fashion (as the number of possible moves decreases exponentially), and leave its form as a set of tunable parameters. The depth's functional form is D = α · 2^(β⌊n/4⌋).
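As a quick illustration of this schedule, the following snippet (a helper of ours, not from the authors' code) evaluates D = α · 2^(β⌊n/4⌋) for the α = 6, β = 1/2 setting used later.

import math

def search_depth(n, alpha=6, beta=0.5):
    """Depth schedule D = alpha * 2**(beta * floor(n / 4)), deepening every 4 plays."""
    return math.floor(alpha * 2 ** (beta * (n // 4)))

# For n = 0, 4, 8, 12, 16 this gives depths 6, 8, 12, 16, 24.
depths = [search_depth(n) for n in range(0, 20, 4)]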
C. Perfect Information Monte Carlo (PIMC)

As described in [9], PIMC is an approximation technique for imperfect information games which reduces the game to a sampling over many perfect information games. In particular, PIMC samples over possible board states given the current information and solves the corresponding perfect-information game as if all players had complete information about each other's hands. By sampling and then taking the expectation over these samples, we get a lower bound on the expected value of a given move.
In general, this technique is quite paranoid: it assumes that every player over whose hands we're sampling actually knows the exact hand of the player who is making the plays. This is often good enough in individual games, but in team-based games the quality of this assumption can fluctuate wildly. This can be intuitively observed by noting that, in the `complete information' game, your partner also knows what you have and is thus able to make moves that the true partner would not be able to infer. This assumption is therefore broken throughout.

In order to compare PIMC with IMS, we implement the simple sampling scheme presented in Algorithm 2, where we let N be the set of players for the game (in this case, N = {1, 2, 3, 4}).
Algorithm 2 Sampling Domino Hands

procedure SAMPLING(G, N)
    D ← unplayed dominoes in game G
    Q ← empty map N → 2^D
    for d ∈ D do
        p ← sample from P_G(d, ·)
        Q[p] ← Q[p] ∪ {d}
    end for
    return Q
end procedure
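In Python, the sampling step and the outer PIMC average might be sketched as below. The belief and game interfaces (tile_beliefs, unplayed_tiles, with_hands, apply, next_player) are assumed names of ours, and solve stands for any complete-information negamax, such as IMS run with 0/1 probabilities.

import random
from collections import defaultdict

def sample_hands(unplayed_tiles, tile_beliefs, rng=random):
    """Algorithm 2: independently assign each unplayed tile to a player, drawn
    from that tile's current probability distribution (the simple scheme above)."""
    hands = defaultdict(list)
    for tile in unplayed_tiles:
        players = list(tile_beliefs[tile])
        weights = [tile_beliefs[tile][p] for p in players]
        owner = rng.choices(players, weights=weights, k=1)[0]
        hands[owner].append(tile)
    return hands

def pimc_value(game, player, move, num_samples, depth, solve):
    """Average the perfect-information value of `move` over sampled deals."""
    total = 0.0
    for _ in range(num_samples):
        deal = sample_hands(game.unplayed_tiles(), game.tile_beliefs())
        child = game.with_hands(deal).apply(move, player)
        total += -solve(child, child.next_player(), depth)   # negamax sign flip
    return total / num_samples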
D. Oracle & Baseline

As our baseline, we implemented the simplest strategy, which is the greedy one. When partnered with a human and playing against two humans, the AI's team won 5 out of 10 rounds. Many of the participants in this short experiment were beginners, so we would expect greedy to have a higher losing rate against more experienced humans. We use this baseline later on as a performance comparison, so that we can run many rounds of the game in a short period of time.

Our oracle is a skilled player who knows everyone's pieces during the game, which is essentially cheating. In this case, as long as the player uses the optimal strategy we described earlier, they can guarantee that the possible loss in each play is minimized. The only way for this strategy to lose is if the player is dealt a very bad hand at the beginning of the game. We can implement a pseudo-oracle using an all-knowing negamax, where the probability of each player having a given domino is set to either zero or one, since we then have perfect certainty about who holds which tiles. A team of negamax "oracle" players wins 91 and loses 9 games out of 100 against greedy players, with α = 5 and β = 3 for the depth. In losing games, no matter what move the negamax players make, the evaluation score is negative.
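The pseudo-oracle only changes the belief state: every tile's distribution collapses onto its true owner. Reusing the hypothetical TileBeliefs sketch from earlier, that degenerate update might read:

def make_oracle(beliefs, true_hands):
    """Collapse each unseen tile's distribution to 0/1 given the actual hands."""
    for owner, hand in true_hands.items():
        for tile in hand:
            if tile in beliefs.p:
                beliefs.p[tile] = {q: (1.0 if q == owner else 0.0)
                                   for q in beliefs.others}
    return beliefs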
 wins  losses  ties    α    β
  128      68     4    5    1/3*
   66      31     3    4    1/3
   63      37     0    6    1/2

Fig. 1. IMS's record when playing against a greedy algorithm. The first row is out of 200 games, and the last two out of 100 games. An asterisk indicates the modified evaluation function, as discussed in the Negamax section.
The gap between our oracle and baseline is straightforward: an agent doesn't know what pieces any of the other players have, but it should try to gain and use that information to its advantage. None of our AI players, greedy or advanced, pass when they can make a legal move, as the penalty for unnecessary passing is so high.

V. ANALYSIS & DISCUSSION

A. Imperfect Minimax Search

A team of IMS players whose first moves were greedy and which had depth parameters α = 6 and β = 1/2 won 40 (70.1%), lost 16 (28.1%), and tied 1 (1.8%) games against a team of greedy players. Without a greedy first move, and playing a total of 100 games for each parameter setting, we have the results seen in Figure 1.^9

The algorithm, for all of its simplicity, performs well against non-trivial opponents. It is somewhat insensitive to the depth parameters, leading us to believe that the evaluation function (in our case, the difference of the expected sum of pips between the two teams) could be further improved. However, neither of the simple evaluation functions we came up with gave a significant boost relative to the other.

B. Perfect Information Monte Carlo (PIMC)

The win percentage achieved by PIMC rivaled that of IMS, though it tended to be a few percentage points higher (Figure 2). To get a sense of how the algorithm behaves with humans, PIMC played a few games with different parameters against a team consisting of a beginner and an advanced player.
^9 Game logs for the games shown can be found at https://github.com/guillean/DominAI/tree/master/results, in text files with the prefixes "ims_" for IMS, "sample_#" for PIMC, "oracle" for the negamax oracle, and "smart_smart" for PIMC and IMS playing against each other.
 wins  losses  ties    α    β    # of samples
   73      26     1    4    1/3           100
   68      32     0    4    1/3            50
   69      27     4    8    1/2            20
   70      29     1    6    1/3            20

Fig. 2. PIMC's record when playing against a greedy algorithm, out of 100 games with different parameters.
 pips left    α    β    # of samples
        14    8    1/2            20
         6    6    1/3            20
         6    6    1/3            40
        14    6    1/3            40
        15    6    1/3            40

Fig. 3. PIMC's games against human players. Each row is a single game.
The AI team lost every game, and approached some turns in unusual ways (Figure 3). For example, in one game the AI player played the (6, 6) domino near the end of the game, and in earlier chances when it could have played it, it played other pieces containing 6 instead. It is generally wiser to play the (6, 6) early, since its pip sum is so high and it can only be put down on a 6 end, unlike any other non-double piece. It may be that the PIMC player thought that placing the (6, 6) would give its opponents the opportunity to place their highest pieces containing a 6.

A team of PIMC players won 55, lost 43, and tied 2 games out of 100 against a team of IMS players, with α = 5, β = 1/3, and # of samples = 50. This confirms that the two algorithms seem to be on par with one another, with PIMC being slightly better.

VI. FUTURE STEPS

We first noted that both IMS and PIMC are quite insensitive to the α, β parameters of the search. We interpreted this to mean that, though depth should not have a great impact so early in the game (when most pieces are uncertain), it is likely that our evaluation function leaves quite a bit to be desired.^10 In particular, a function incorporating more expert knowledge might do a bit better than simply maximizing (opponent pips − team pips) at every point in time. Additionally, it might be worth investigating a functional estimator for the expected value of a board position; e.g., is it possible to train a functional estimator to evaluate the expected value of a board for a player given her dominoes? Probabilistically speaking, this is of course possible if we assume uniform drawing from the possible distribution of dominoes (which we do in the current form of the game), but if we assume that players are playing carefully, then this assumption is wildly wrong, leading us to a weaker^11 probabilistic update than should actually be the case. This implies, then, that the expectation we are computing for a given move is not as accurate as one would think.

In the same vein, it might also be worth implementing a parameter (as with the previous cases, λ ∈ [0, 1]) stating how likely it is that a player draws from a uniform distribution vs. plays a minimax strategy, and adjusting the parameter accordingly as the game continues. Our Bayesian update would then be weighted by such a factor, allowing a potentially better deduction of what a player's tiles are based on their current set of moves, and making the expectation taken over tiles and observed information more accurate. Here λ serves as a `playing level' for a given player; the closer to 1, the more minimax-optimal this player's decisions are. Note that assuming λ = 0 (i.e., our update is uniform over all tiles) for all opposing players is equivalent to the algorithm presented in this paper.

^10 In fact, we noted that our oracle performed marginally worse when increasing search depth, which should certainly not be the case. This could be attributed to the previous point, or it could also be attributed to the machine over-estimating the greedy players' abilities.
^11 We take `weaker' to mean more uniform over some support. A more mathematical definition could be, say, a distribution with higher probabilistic entropy, though we use the phrase loosely here.
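One way to read the proposed λ-weighted update (λ being a placeholder name of ours for the playing-level parameter) is as a Bayesian step whose likelihood blends a uniform-draw model with a `plays like minimax' model of the opponent. A rough sketch:

def weighted_update(prior, uniform_likelihood, minimax_likelihood, lam):
    """Blend the two models of a player's behavior with weight lam in [0, 1],
    then renormalize to obtain the posterior over that player's possible tiles."""
    posterior = {}
    for tile, p in prior.items():
        likelihood = (1 - lam) * uniform_likelihood[tile] + lam * minimax_likelihood[tile]
        posterior[tile] = p * likelihood
    z = sum(posterior.values()) or 1.0
    return {tile: p / z for tile, p in posterior.items()}

With lam = 0 this reduces to the uniform assumption used throughout the paper; with lam closer to 1, tiles inconsistent with near-optimal play are down-weighted.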
Additionally, another simple improvement would be to speed up the minimax search using Monte Carlo tree search, in order to explore the tree more deeply than is the case here. Since the results do not need to be that accurate, we expect that this might improve the algorithm by a decent amount if implemented correctly, but quick experiments increasing the depth of both IMS and PIMC show little improvement in overall game playing, so this change may only become apparent at really large depths, which are infeasible in the current implementation.

REFERENCES

[1] J. R. Long, N. R. Sturtevant, M. Buro, and T. Furtak, "Understanding the Success of Perfect Information Monte Carlo Sampling in Game Tree Search," Proc. Assoc. Adv. Artif. Intell., pp. 134–140, 2010.
[2] M. L. Ginsberg, "GIB: Imperfect information in a computationally challenging game," Journal of Artificial Intelligence Research, vol. 14, pp. 313–368, 2001.
[3] D. Koller and A. Pfeffer, "Generating and Solving Imperfect Information Games," Proc. 14th Int. Joint Conf. Artif. Intell., vol. 14, pp. 1185–1193, 1995.
[4] N. Burch and M. Bowling, "CFR-D: Solving Imperfect Information Games Using Decomposition," arXiv preprint arXiv:1303.4441, pp. 1–15, 2013.
[5] M. H. Smith, "A learning program which plays partnership dominoes," Communications of the ACM, vol. 16, no. 8, pp. 462–467, 1973.
[6] A. R. Da Cruz, F. G. Guimaraes, and R. H. C. Takahashi, "Comparing strategies to play a 2-sided dominoes game," Proceedings - 1st BRICS Countries Congress on Computational Intelligence, BRICS-CCI 2013, pp. 310–316, 2013.
[7] J. Anderson and J. Varuzza, International Dominos. Seven Hills Books, 1991.
[8] G. T. Heineman, G. Pollice, and S. Selkow, Algorithms in a Nutshell. O'Reilly Media, Inc., 1st ed., 2009.
[9] T. Furtak and M. Buro, "Recursive Monte Carlo search for imperfect information games," IEEE Conference on Computational Intelligence and Games, CIG, 2013.
