Stanford NLP Group @stanfordnlp Stanford, CA, USA

Computational Linguistics—Natural Language—Machine Learning—Deep Learning—Silicon Valley tech. @chrmanning—@jurafsky—@percyliang—@ChrisGPotts at @StanfordAILab

98 Following   94,621 Followers   6,783 Tweets

Joined Twitter 2/28/10


There are also substantial improvements to the CoreNLPClient. Full details of all the new and changed stuff can be… https://t.co/6uEZOlUSFk
We’ve just released Stanza v1.1.1, our #NLProc package for many human languages. It adds sentiment analysis, medica… https://t.co/BbrZYFHtzi
8/13/2020
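A minimal usage sketch for the sentiment processor added in the Stanza v1.1.1 release above, following the documented pipeline pattern; the example sentence is made up:

    import stanza

    # One-time download of the default English models (includes sentiment).
    stanza.download('en')

    # Build a pipeline with the sentiment processor added in v1.1.1.
    nlp = stanza.Pipeline('en', processors='tokenize,sentiment')

    doc = nlp('The new release is great. The old tokenizer was frustrating.')
    for sentence in doc.sentences:
        # sentiment is an integer class: 0 = negative, 1 = neutral, 2 = positive
        print(sentence.text, sentence.sentiment)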
I started Tech Policy Watch with the help of my research assistants and staff. Feel free to sign up ↘️… https://t.co/NryUdtUTEe
Retweeted by Stanford NLP Group
Very excited for @emilymbender & @alkoller's @BayAreaNLP talk tomorrow! https://t.co/7chhS8zuUW With more than 350… https://t.co/d1Zh90Ct09
Retweeted by Stanford NLP Group
Congrats to faculty affiliate @jurafsky for being selected as an inaugural recipient of the Hoffman-Yee Research Gr… https://t.co/oxxmyBRtds
Retweeted by Stanford NLP Group
8/12/2020
Stanza: Official Stanford #NLP Python Library for Many Human Languages https://t.co/Dh8YhrcDrf via @MarkTechPost https://t.co/tSI8b2KVLA
Retweeted by Stanford NLP Group
“In a sense, it’s nothing short of miraculous.” #AI systems using a Mad Libs-style game start learning grammatical… https://t.co/Amx8RvO8ML
Retweeted by Stanford NLP Group
8/10/2020
When you’re a smart scientist! 😁 Reference from Daniel Jurafsky’s lecture slides. https://t.co/PhWbuL5PcS
Retweeted by Stanford NLP Group
The Bias-Variance Trade-Off & "DOUBLE DESCENT" 🧵 Remember the bias-variance trade-off? It says that models perfor… https://t.co/WZZAUTw0F7 [decomposition sketched below]
Retweeted by Stanford NLP Group
Dan Jurafsky (@jurafsky) is the Professor & Chair of Linguistics & Professor of Computer Science at Stanford Univers… https://t.co/2FvH1S68Pg
Retweeted by Stanford NLP Group
Christopher Manning (@chrmanning) is the Director of Stanford Artificial Intelligence Laboratory (SAIL) & the Profe… https://t.co/b4IBs7qnvs
Retweeted by Stanford NLP Group
Some people to be aware of in #NLProc 🧵👇 https://t.co/GuBI6y1Itf
8/9/2020
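For reference, the decomposition the bias-variance thread above alludes to, in its standard squared-error form with irreducible noise variance sigma^2 (a textbook identity, not quoted from the thread itself):

    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
      = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
      + \underbrace{\mathrm{Var}\big[\hat{f}(x)\big]}_{\text{variance}}
      + \sigma^2

"Double descent" is the observation that, past the interpolation threshold, test error can fall again as model capacity grows, complicating this classical U-shaped picture.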
There is a misconception that famous researchers in ML are innately brilliant. Most well known researchers I meet… https://t.co/U2UiWxQpDz
Retweeted by Stanford NLP Group
@quadrismegistus @Ted_Underwood @GaryMarcus A machine can't learn that from word distributions alone. It's not as t… https://t.co/glYAqjlxLP
Retweeted by Stanford NLP Group
Google is apparently adding some kind of citation / information provenance links to individual statements in the kn… https://t.co/suWsgpePgN
Retweeted by Stanford NLP Group
8/8/2020
aaah my very first journal article just got published!! Huge thanks to collaborators Yulia Tsvetkov and @jurafsky,… https://t.co/dff0RkBMUS
Retweeted by Stanford NLP Group
Enroll now in our next professional AI course, Natural Language Understanding w/ @ChrisGPotts. Complete an original… https://t.co/6ausnbQnoV
Retweeted by Stanford NLP Group
The videos and suggested readings for this Stanford Natural Language Understanding course look useful to understand… https://t.co/BM5LQlSl4G
Retweeted by Stanford NLP Group
New! Do you use NaturalQuestions, TriviaQA, or WebQuestions? It turns out 60% of test set answers are also in the t… https://t.co/F4Kkv9vNJV
Retweeted by Stanford NLP Group
There are few options for evaluating multilingual question answering outside of English, and especially few for ope… https://t.co/eiERoT2OiG
Retweeted by Stanford NLP Group
@markus_zechner @omarsar0 I recommend https://t.co/PZ5FOscj67 on NLP in general. Linguistics wise, I am reading Emi… https://t.co/39zaesbeWi
Retweeted by Stanford NLP Group
If you watched @lastweektonight 's last episode about history, I would encourage you to read @jurafsky, lucy li, p… https://t.co/hEZNqrhD84
Retweeted by Stanford NLP Group
Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning. Evan Zheran Liu, Aditi R… https://t.co/zkemcVzCV8
Retweeted by Stanford NLP Group
@ccb @stanfordnlp not a dataset but naturalizing a programming language (https://t.co/v8UYJ3440W) is my favorite mt… https://t.co/aVi88K4p2P
Retweeted by Stanford NLP Group
8/7/2020
I'm writing a survey article on crowdsourcing for NLP. What are your favorite datasets that were created with Amaz… https://t.co/4laaufOZVB
Retweeted by Stanford NLP Group
8/6/2020
Our @stanfordnlp chatbot Chirpy Cardinal 🐦 won 2nd place in the #AlexaPrize finals! 🥈 It wouldn't have been possi… https://t.co/5hPLT5vcEk
Retweeted by Stanford NLP Group
8/4/2020
@pirotw Good luck!
8/2/2020
Atticus Geiger & Alex Carstensen present work with me and @ChrisGPotts on neural networks learning and generalizing… https://t.co/etvmgh2QJ4
Retweeted by Stanford NLP Group
Probing, for testing the linguistic properties of NLP models, such as syntactic and semantic ones:… https://t.co/tIkrKHQlDC
Retweeted by Stanford NLP Group
8/1/2020
Join us for the Conference on Computational Sociology, cohosted by @StanfordGSB. We'll showcase research that displ… https://t.co/mrVmiICTWm
Retweeted by Stanford NLP Group
We’re delighted to release CoreNLP v4.1.0! This continues the v4 roll-out using UDv2 and “new” LDC English tokeniza… https://t.co/iUCnqSL0t6 [client usage sketched below]
@joseberlines Wait, that isn’t _our_ content. We do put our own content on open blogs, such as the SAIL Blog… https://t.co/lN44Vf9K4t
Woah... My article featured by Stanford NLP 🙂 https://t.co/QUvRsvTWGw
Retweeted by Stanford NLP Group
@IyalMozhi @subalalithaCN Through Universal Dependencies data, Stanza supports tokenization through dependency pars… https://t.co/39iIOojaho
@ranaalisaeed Don’t know anything for that, sorry
@ranaalisaeed No. What intents on what higher education dataset?
@zehavoc @stanfordnlp @ethanachi @johnhewtt Separately, super impressed by @ethanachi with this work (and the beaut… https://t.co/nam3OVHLlX
Retweeted by Stanford NLP Group
reviewing deadline confession: I'm so jealous of people that can write complex papers in a perfectly simple, non pe… https://t.co/md5fOoJoBD
Retweeted by Stanford NLP Group
5 NLP Libraries Everyone Should Know | by Pawan Jain | Jul, 2020 | Towards Data Science: spaCy, NLTK, Transformers,… https://t.co/5ElmVe1fWg
"Incorporating Dialectal Variability for Socially Equitable Language Identification" https://t.co/qbW2LB5dkR https://t.co/2tF9Dzb16J
Retweeted by Stanford NLP Group
7/31/2020
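A sketch of driving the CoreNLP v4.1.0 release above from Python via Stanza's client interface, assuming CoreNLP is installed locally and the CORENLP_HOME environment variable points at it; annotator names follow the CoreNLP documentation:

    from stanza.server import CoreNLPClient

    # Launches a CoreNLP server in the background and shuts it down on exit.
    with CoreNLPClient(annotators=['tokenize', 'ssplit', 'pos', 'depparse'],
                       timeout=30000, memory='6G') as client:
        ann = client.annotate('CoreNLP v4.1.0 continues the v4 roll-out.')
        for sentence in ann.sentence:      # protobuf Document fields
            for token in sentence.token:
                print(token.word, token.pos)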
@acnwala @storygraphbot Sorry, but at present Stanza doesn’t have any date time normalization – there are still a n… https://t.co/Loy0qwBJy9
State-of-the-art #NLP models from R - RStudio AI Blog #RStats users can benefit from all the state-of-the-art Natu… https://t.co/lyLqE4aL0T
Retweeted by Stanford NLP Group
The need for open data & benchmarks in modern ML research has led to an outpouring of #NLProc data creation. But… https://t.co/GrJWJbICqo
Retweeted by Stanford NLP Group
👋 Introducing biomedical & clinical model packages for the Stanza #NLProc toolkit, including: - 2 bio UD syntactic… https://t.co/oobEhmh6md
Retweeted by Stanford NLP Group
We thank💐 @chrmanning, Director of @StanfordAILab for giving his lecture on "Linguistic Structure Discovery with De… https://t.co/xgWxATG4GG
Retweeted by Stanford NLP Group
🆕 We’ve extended Stanza with first domain-specific #NLProc models for biomedical and clinical medical English. They… https://t.co/5Mb1qAoVDV
7/30/2020
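A minimal sketch of loading the biomedical/clinical Stanza packages announced above; the 'mimic' package and 'i2b2' NER model names come from the Stanza biomed release notes and should be checked against the current documentation:

    import stanza

    # Clinical English pipeline: MIMIC-trained syntactic models plus an
    # i2b2-trained clinical NER model.
    stanza.download('en', package='mimic', processors={'ner': 'i2b2'})
    nlp = stanza.Pipeline('en', package='mimic', processors={'ner': 'i2b2'})

    doc = nlp('The patient was prescribed 40 mg of atorvastatin daily.')
    for ent in doc.entities:
        print(ent.text, ent.type)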
Hello #NLProc twitter! We are NLP with Friends, a friendly online NLP seminar series where students present ongoing… https://t.co/oipXHm2n4N
Retweeted by Stanford NLP Group
@chrmanning @AveryAndrews @clark_kev @johnhewtt @ukhndlwl @omerlevy_ @ethanachi But as you note, it seems reasonabl… https://t.co/r75FnD9Qjs
7/29/2020
@alexandersclark @emilymbender @chrmanning Unsupervised text-only models can acquire syntactic regularities, which… https://t.co/YtNM4u7M70
Retweeted by Stanford NLP Group
@brandontlocke Let us know if you need any help in handling other languages — even though everything defaults to En… https://t.co/abQodHNqB9
7/28/2020
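On the "defaults to English" reply above: Stanza pipelines take a language code, so switching languages is a one-argument change. A sketch with French, chosen purely for illustration:

    import stanza

    # Download and run a non-English pipeline; 'fr' is an arbitrary example
    # of the 60+ languages with Universal Dependencies-trained models.
    stanza.download('fr')
    nlp = stanza.Pipeline('fr')

    doc = nlp('Le traitement automatique des langues fonctionne aussi en français.')
    print([word.lemma for sent in doc.sentences for word in sent.words])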
I have a joke about NLP, but it's not funny unless NLP is recognized as a branch of Artificial Intelligence (P.S. w… https://t.co/YKdzJgAgFh
Retweeted by Stanford NLP Group
7/27/2020
@KoryHoopes When it comes to ML like this, I often have to compromise to a non-javascript-only solution. The tools… https://t.co/PEnkogKm5F
Retweeted by Stanford NLP Group
@terrible_coder @wellformedness @nletcher @BayesForDays @SnorkelML Actually non-linear fns too- but we'll take "wei… https://t.co/JM80LVqjHJ
Retweeted by Stanford NLP Group
7/25/2020
Understanding the knowledge of language of recent neural models like BERT: “As these models get bigger, they self-o… https://t.co/uLOvQzy02R
Important new work in unsupervised learning of PCFGs 🧵👇 (okay, okay, we know people only do neural sequences these… https://t.co/697aqOR5QN
Free From Stanford: Ethical and Social Issues in Natural Language Processing #kdnuggets #stanford #data #nlp https://t.co/tTVg8aNcnO
Retweeted by Stanford NLP Group
7/24/2020
@tDP59WNicJGG7iz It doesn’t really matter so long as you’re consistent, since it’s just a scaling factor: log_b a… https://t.co/gL52xGSAoH [identity worked out below]
Syntactic abstractions emerge in neural networks trained via word prediction: cool paper by @chrmanning et al. Supp… https://t.co/KbglNX4wAv
Retweeted by Stanford NLP Group
7/23/2020
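The scaling-factor point in the log-base reply above is the change-of-base identity: for any bases b and c,

    \log_b a = \frac{\log_c a}{\log_c b}, \qquad \text{e.g.} \quad \log_2 x = \frac{\ln x}{\ln 2}

so converting, say, an entropy from nats to bits just multiplies every value by the constant 1/\ln 2; comparisons between models are unchanged.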
NERtwork is a workflow/set of scripts for producing networks of co-occurring entities within a set of documents (or… https://t.co/L5iwIB5FHI
For senior AI scientists open to moving to Germany: Alexander von Humboldt Professorship (for Artificial Intellig… https://t.co/R8pA8bNbMH
Retweeted by Stanford NLP Group
@MiguelISolano Well, actually it’s published in Transactions of the Association for Computational Linguistics (TACL… https://t.co/GCwb5oqF84
That's quite an interesting and refreshing view on transformer-based LM capabilities and limits. At least it's going… https://t.co/M6MqG6Gzns
Retweeted by Stanford NLP Group
at the risk of stating the bleeding obvious, isn't the better question to derive from this, "what does this say ab… https://t.co/oKEo9dmcKL
Retweeted by Stanford NLP Group
7/22/2020
@IntuitMachine @mhahn29 offers insight on that semantic gap. His paper suggests to me that the ELIZA effect of GPT3… https://t.co/VqejDkyslr
Retweeted by Stanford NLP Group
This is great: NLP and ethics. https://t.co/uDTlJ314BY #stanfordすごい #NLP #ethic
Retweeted by Stanford NLP Group"But how could ketchup be Chinese?" Taken from @jurafsky's book, The Language of Food https://t.co/oxp1z7YScR
Retweeted by Stanford NLP Group"Relevance-guided Supervision for OpenQA with ColBERT" Omar Khattab, @ChrisGPotts, @matei_zahariahttps://t.co/r6vFoXdMn0
Retweeted by Stanford NLP Group📙 Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Gene… https://t.co/xlTqo8nTqZ
Retweeted by Stanford NLP GroupA "tell-all" account of why improving @SemanticScholar #search is not as simple as you might think... Dealing with… https://t.co/GzfbcLBdld
Retweeted by Stanford NLP GroupOur fifth featured article from the June issue is titled Dependency locality as an explanatory principle for word o… https://t.co/TLA5LyU0yQ
Retweeted by Stanford NLP Group
7/21/2020
Looks like “Chief Prompt Writer” will become a role, given all the GPT3 tweets.
Retweeted by Stanford NLP Group
Excited to finally see this work analyzing U.S. history textbooks posted online, with co-authors @ddemszky,… https://t.co/HxhCtAyThh
Retweeted by Stanford NLP Group
7/19/2020
For anyone interested in #NLProc - CS224N, taught by the amazing @chrmanning from @stanfordnlp, is hands down the b… https://t.co/fkmcH8uxw1
Retweeted by Stanford NLP Group
4 years old but this NLP & Deep Learning sound horn from @chrmanning rings just as true and pertinent now as ever https://t.co/yCyejmYQDm
Retweeted by Stanford NLP Group
7/18/2020
Predicting concepts as an intermediate representation results in accurate, interpretable, and intervenable models.… https://t.co/hzROKmXYld
Retweeted by Stanford NLP Group
@frannnscf They seem to be available now….
7/17/2020
Practical NLP is all you need to get your NLP projects going. The reader learns foundational topics, core tasks, an… https://t.co/oqItXKhgmI
Retweeted by Stanford NLP Group
7/16/2020
NLP is going to be the most transformational tech of the decade!
Retweeted by Stanford NLP Group
We wrote a longer version of the @huggingface🤗transformers paper (EMNLP demos). It goes through the library and mo… https://t.co/T6Gk1FyuEF
Retweeted by Stanford NLP Group
Great work from @megha_byte @tatsu_hashimoto and @percyliang https://t.co/qfxNDo3hpB Identifying spurious correlat… https://t.co/Aq2q6lxyLF
Retweeted by Stanford NLP Group
⚡️UPDATE - Super Duper NLP Repo ⚡️ Added another 52 notebooks bringing us to 233 total NLP Colabs. Thank you for c… https://t.co/0NLzLhq1sH
Retweeted by Stanford NLP Group
📺 Recommended NLP talk of the day 📺 Christopher Potts presents an overview of how to improve natural language unde… https://t.co/VNwNEYbHgK
Retweeted by Stanford NLP Group
How can we adapt to very different target distributions in a principled way? w/ @tengyuma @percyliang We show that… https://t.co/4VH0sosccD
Retweeted by Stanford NLP Group
How can we be robust to changes in unmeasured variables such as confounders? @megha_byte shows that we can leverag… https://t.co/2aHrTnlpPH
Retweeted by Stanford NLP Group
Great line of questions from the stanford cs 224 course. Eventual question: What might be the cause of these biase… https://t.co/kEjf1UGfS8
Retweeted by Stanford NLP Group
@codexeditor @Conaw Yeah Stanford NLP is the default for this for sure. I remember this from the days I was really… https://t.co/C40MdkDbfp
Retweeted by Stanford NLP Group
@DataSciBae I thought lectures at https://t.co/jHGfOZAsQQ were good. Focus is implications of deep learning for NLP… https://t.co/PvaMfR5WAU
Retweeted by Stanford NLP Group
@DataSciBae this book on nlp by @jurafsky is one of the best textbooks i have ever read on any subject https://t.co/dQeGoJimwy
Retweeted by Stanford NLP Group
7/15/2020
Suffering from post-#acl2020nlp withdrawal? There are also lots of @percyliang, @tatsu_hashimoto, and other Stanfor… https://t.co/s9mrOqhN1r
@RTomMcCoy Good choices, Tom!
Thread about five #acl2020nlp papers that haven’t gotten the hype they deserve:
Retweeted by Stanford NLP Group
1/ Crazy exp: take Resnet embedding of Imagenet as dataset A. Train linear predictor on A; get accuracy p. Now make… https://t.co/SGleuGVrvf
Retweeted by Stanford NLP Group
There can be useful discussion on twitter, episode #127: @yoavgo, @tallinzen, @mmitchell_ai, and our own… https://t.co/TzrBiOXOfG
7/12/2020
ACL 2020 #acl2020nlp ends this week! If you didn't manage to attend all #KnowledgeGraph related talks - I comprised… https://t.co/XFRx4azcuL
Retweeted by Stanford NLP Group
@yuhaozhangx @aclmeeting From @jurafsky on RocketChat: "The website will remain available, with videos and pointers… https://t.co/CR4oBvuEpu
Retweeted by Stanford NLP Group
#acl2020nlp First ACL and what a brilliant experience! Thank you @aclmeeting for organizing this virtual conference… https://t.co/xzfvLxorqV
Retweeted by Stanford NLP Group
@yoavgo @BayesForDays @wellformedness From what I've seen, the #NLProc community has a greater emphasis on ethics t… https://t.co/vdffuOJ1Rm
Retweeted by Stanford NLP Group
Excited to find that my 1st PhD publication (4 years ago) is now the 2nd most cited paper in EMNLP over past few y… https://t.co/AuofDT5Ajm
Retweeted by Stanford NLP Group
I just published Highlights of ACL 2020 https://t.co/1u4xBByou2 #acl2020nlp
Retweeted by Stanford NLP Group
7/11/2020
“We have to be mindful of whether the benefits of these heavy-compute models are worth the cost of the impact on th… https://t.co/fCzfwhcJ0v
Retweeted by Stanford NLP Group
7/10/2020
